00:00:00.001 Started by upstream project "autotest-per-patch" build number 132024
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:10.474 The recommended git tool is: git
00:00:10.474 using credential 00000000-0000-0000-0000-000000000002
00:00:10.476 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:10.487 Fetching changes from the remote Git repository
00:00:10.489 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:10.501 Using shallow fetch with depth 1
00:00:10.501 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:10.501 > git --version # timeout=10
00:00:10.512 > git --version # 'git version 2.39.2'
00:00:10.512 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:10.523 Setting http proxy: proxy-dmz.intel.com:911
00:00:10.523 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:14.126 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:14.138 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:14.152 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD)
00:00:14.152 > git config core.sparsecheckout # timeout=10
00:00:14.165 > git read-tree -mu HEAD # timeout=10
00:00:14.183 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5
00:00:14.211 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser"
00:00:14.211 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10
00:00:14.308 [Pipeline] Start of Pipeline
00:00:14.328 [Pipeline] library
00:00:14.329 Loading library shm_lib@master
00:00:14.329 Library shm_lib@master is cached. Copying from home.
00:00:14.343 [Pipeline] node
00:00:14.348 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest
00:00:14.350 [Pipeline] {
00:00:14.357 [Pipeline] catchError
00:00:14.359 [Pipeline] {
00:00:14.370 [Pipeline] wrap
00:00:14.378 [Pipeline] {
00:00:14.383 [Pipeline] stage
00:00:14.384 [Pipeline] { (Prologue)
00:00:14.396 [Pipeline] echo
00:00:14.397 Node: VM-host-SM4
00:00:14.401 [Pipeline] cleanWs
00:00:14.409 [WS-CLEANUP] Deleting project workspace...
00:00:14.409 [WS-CLEANUP] Deferred wipeout is used...
00:00:14.414 [WS-CLEANUP] done
00:00:14.665 [Pipeline] setCustomBuildProperty
00:00:14.736 [Pipeline] httpRequest
00:00:15.386 [Pipeline] echo
00:00:15.387 Sorcerer 10.211.164.101 is alive
00:00:15.396 [Pipeline] retry
00:00:15.398 [Pipeline] {
00:00:15.410 [Pipeline] httpRequest
00:00:15.414 HttpMethod: GET
00:00:15.415 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:15.415 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:15.438 Response Code: HTTP/1.1 200 OK
00:00:15.439 Success: Status code 200 is in the accepted range: 200,404
00:00:15.439 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:39.584 [Pipeline] }
00:00:39.601 [Pipeline] // retry
00:00:39.607 [Pipeline] sh
00:00:39.907 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:39.920 [Pipeline] httpRequest
00:00:40.958 [Pipeline] echo
00:00:40.960 Sorcerer 10.211.164.101 is alive
00:00:40.968 [Pipeline] retry
00:00:40.969 [Pipeline] {
00:00:40.984 [Pipeline] httpRequest
00:00:40.988 HttpMethod: GET
00:00:40.989 URL: http://10.211.164.101/packages/spdk_1ca8338609c29f567d34186eaebac19678707e5f.tar.gz
00:00:40.989 Sending request to url: http://10.211.164.101/packages/spdk_1ca8338609c29f567d34186eaebac19678707e5f.tar.gz
00:00:40.993 Response Code: HTTP/1.1 200 OK
00:00:40.993 Success: Status code 200 is in the accepted range: 200,404
00:00:40.994 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_1ca8338609c29f567d34186eaebac19678707e5f.tar.gz
00:05:19.164 [Pipeline] }
00:05:19.182 [Pipeline] // retry
00:05:19.189 [Pipeline] sh
00:05:19.468 + tar --no-same-owner -xf spdk_1ca8338609c29f567d34186eaebac19678707e5f.tar.gz
00:05:22.771 [Pipeline] sh
00:05:23.051 + git -C spdk log --oneline -n5
00:05:23.051 1ca833860 nvme/rdma: Add likely/unlikely to IO path
00:05:23.051 13fe09815 nvme/rdma: Factor out contig request preparation
00:05:23.051 83708fda4 lib/rdma_provider: Allow to set data_transfer cb
00:05:23.051 a46541aa1 nvme/rdma: Allocate memory domain in rdma provider
00:05:23.051 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid
00:05:23.068 [Pipeline] writeFile
00:05:23.081 [Pipeline] sh
00:05:23.361 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:05:23.372 [Pipeline] sh
00:05:23.659 + cat autorun-spdk.conf
00:05:23.660 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:23.660 SPDK_TEST_NVME=1
00:05:23.660 SPDK_TEST_FTL=1
00:05:23.660 SPDK_TEST_ISAL=1
00:05:23.660 SPDK_RUN_ASAN=1
00:05:23.660 SPDK_RUN_UBSAN=1
00:05:23.660 SPDK_TEST_XNVME=1
00:05:23.660 SPDK_TEST_NVME_FDP=1
00:05:23.660 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:23.681 RUN_NIGHTLY=0
00:05:23.682 [Pipeline] }
00:05:23.695 [Pipeline] // stage
00:05:23.708 [Pipeline] stage
00:05:23.710 [Pipeline] { (Run VM)
00:05:23.721 [Pipeline] sh
00:05:24.001 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:05:24.001 + echo 'Start stage prepare_nvme.sh'
00:05:24.001 Start stage prepare_nvme.sh
00:05:24.001 + [[ -n 3 ]]
00:05:24.001 + disk_prefix=ex3
00:05:24.001 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:05:24.001 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:05:24.001 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:05:24.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:24.001 ++ SPDK_TEST_NVME=1
00:05:24.001 ++ SPDK_TEST_FTL=1
00:05:24.001 ++ SPDK_TEST_ISAL=1
00:05:24.001 ++ SPDK_RUN_ASAN=1
00:05:24.001 ++ SPDK_RUN_UBSAN=1
00:05:24.001 ++ SPDK_TEST_XNVME=1
00:05:24.001 ++ SPDK_TEST_NVME_FDP=1
00:05:24.001 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:24.001 ++ RUN_NIGHTLY=0
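autorun-spdk.conf is plain shell: prepare_nvme.sh sources it here, and spdk/autorun.sh sources it again inside the VM later in this log, so every SPDK_TEST_*/SPDK_RUN_* flag is just a variable that later steps branch on. A minimal sketch of that pattern (illustrative only, not the actual autotest logic):

    #!/usr/bin/env bash
    # Source the per-job config, then gate work on its flags.
    source autorun-spdk.conf
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        echo "FDP requested: an FDP-capable NVMe backend image is needed"
    fi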
00:05:24.001 + cd /var/jenkins/workspace/nvme-vg-autotest
00:05:24.001 + nvme_files=()
00:05:24.001 + declare -A nvme_files
00:05:24.001 + backend_dir=/var/lib/libvirt/images/backends
00:05:24.001 + nvme_files['nvme.img']=5G
00:05:24.001 + nvme_files['nvme-cmb.img']=5G
00:05:24.001 + nvme_files['nvme-multi0.img']=4G
00:05:24.001 + nvme_files['nvme-multi1.img']=4G
00:05:24.001 + nvme_files['nvme-multi2.img']=4G
00:05:24.001 + nvme_files['nvme-openstack.img']=8G
00:05:24.001 + nvme_files['nvme-zns.img']=5G
00:05:24.001 + (( SPDK_TEST_NVME_PMR == 1 ))
00:05:24.001 + (( SPDK_TEST_FTL == 1 ))
00:05:24.001 + nvme_files["nvme-ftl.img"]=6G
00:05:24.001 + (( SPDK_TEST_NVME_FDP == 1 ))
00:05:24.001 + nvme_files["nvme-fdp.img"]=1G
00:05:24.001 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:05:24.001 + for nvme in "${!nvme_files[@]}"
00:05:24.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:05:24.001 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:05:24.001 + for nvme in "${!nvme_files[@]}"
00:05:24.001 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:05:24.001 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:05:24.001 + for nvme in "${!nvme_files[@]}"
00:05:24.002 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:05:24.002 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:05:24.002 + for nvme in "${!nvme_files[@]}"
00:05:24.002 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:05:24.002 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:05:24.002 + for nvme in "${!nvme_files[@]}"
00:05:24.002 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:05:24.002 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:05:24.002 + for nvme in "${!nvme_files[@]}"
00:05:24.002 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:05:24.260 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:05:24.260 + for nvme in "${!nvme_files[@]}"
00:05:24.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:05:24.260 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:05:24.260 + for nvme in "${!nvme_files[@]}"
00:05:24.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:05:24.260 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:05:24.260 + for nvme in "${!nvme_files[@]}"
00:05:24.260 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:05:24.260 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:05:24.518 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:05:24.518 + echo 'End stage prepare_nvme.sh'
00:05:24.518 End stage prepare_nvme.sh
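Each backend file is a raw image preallocated with fallocate, per the 'fmt=raw ... preallocation=falloc' lines above. create_nvme_img.sh itself is not captured in this log, but an equivalent standalone command (a sketch, assuming qemu-img is available) would be:

    # 5 GiB raw image, preallocated, matching the ex3-nvme.img line above
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex3-nvme.img 5G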
00:05:24.529 [Pipeline] sh
00:05:24.811 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:05:24.811 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:05:24.811
00:05:24.811 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:05:24.811 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:05:24.811 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:05:24.811 HELP=0
00:05:24.811 DRY_RUN=0
00:05:24.811 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:05:24.811 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:05:24.811 NVME_AUTO_CREATE=0
00:05:24.811 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:05:24.811 NVME_CMB=,,,,
00:05:24.811 NVME_PMR=,,,,
00:05:24.811 NVME_ZNS=,,,,
00:05:24.811 NVME_MS=true,,,,
00:05:24.811 NVME_FDP=,,,on,
00:05:24.811 SPDK_VAGRANT_DISTRO=fedora39
00:05:24.811 SPDK_VAGRANT_VMCPU=10
00:05:24.811 SPDK_VAGRANT_VMRAM=12288
00:05:24.811 SPDK_VAGRANT_PROVIDER=libvirt
00:05:24.811 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:05:24.811 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:05:24.811 SPDK_OPENSTACK_NETWORK=0
00:05:24.811 VAGRANT_PACKAGE_BOX=0
00:05:24.811 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:05:24.811 FORCE_DISTRO=true
00:05:24.811 VAGRANT_BOX_VERSION=
00:05:24.811 EXTRA_VAGRANTFILES=
00:05:24.811 NIC_MODEL=e1000
00:05:24.811
00:05:24.811 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:05:24.811 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:05:28.171 Bringing machine 'default' up with 'libvirt' provider...
00:05:28.171 ==> default: Creating image (snapshot of base box volume).
00:05:28.429 ==> default: Creating domain with the following settings...
00:05:28.429 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730727554_a55c0b25f4710a32ea6b
00:05:28.429 ==> default: -- Domain type: kvm
00:05:28.429 ==> default: -- Cpus: 10
00:05:28.429 ==> default: -- Feature: acpi
00:05:28.429 ==> default: -- Feature: apic
00:05:28.429 ==> default: -- Feature: pae
00:05:28.429 ==> default: -- Memory: 12288M
00:05:28.429 ==> default: -- Memory Backing: hugepages:
00:05:28.429 ==> default: -- Management MAC:
00:05:28.429 ==> default: -- Loader:
00:05:28.429 ==> default: -- Nvram:
00:05:28.429 ==> default: -- Base box: spdk/fedora39
00:05:28.429 ==> default: -- Storage pool: default
00:05:28.429 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730727554_a55c0b25f4710a32ea6b.img (20G)
00:05:28.429 ==> default: -- Volume Cache: default
00:05:28.429 ==> default: -- Kernel:
00:05:28.429 ==> default: -- Initrd:
00:05:28.429 ==> default: -- Graphics Type: vnc
00:05:28.429 ==> default: -- Graphics Port: -1
00:05:28.429 ==> default: -- Graphics IP: 127.0.0.1
00:05:28.429 ==> default: -- Graphics Password: Not defined
00:05:28.429 ==> default: -- Video Type: cirrus
00:05:28.429 ==> default: -- Video VRAM: 9216
00:05:28.429 ==> default: -- Sound Type:
00:05:28.430 ==> default: -- Keymap: en-us
00:05:28.430 ==> default: -- TPM Path:
00:05:28.430 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:28.430 ==> default: -- Command line args:
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:28.430 ==> default: -> value=-drive,
00:05:28.430 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:28.430 ==> default: -> value=-drive,
00:05:28.430 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:05:28.430 ==> default: -> value=-drive,
00:05:28.430 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:28.430 ==> default: -> value=-drive,
00:05:28.430 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:28.430 ==> default: -> value=-drive,
00:05:28.430 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:05:28.430 ==> default: -> value=-drive,
00:05:28.430 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:05:28.430 ==> default: -> value=-device,
00:05:28.430 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
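Of the four controllers, nvme-3 is the FDP target: it hangs off an explicit NVM subsystem with Flexible Data Placement enabled and its reclaim-unit geometry set (fdp.runs = reclaim unit nominal size, fdp.nrg = reclaim groups, fdp.nruh = reclaim unit handles), which is what SPDK_TEST_NVME_FDP=1 exercises on the QEMU v8.0.0 build pinned above. Condensed from the arguments above, that stanza alone is:

    qemu-system-x86_64 ... \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096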
00:05:28.689 ==> default: Creating shared folders metadata...
00:05:28.689 ==> default: Starting domain.
00:05:31.217 ==> default: Waiting for domain to get an IP address...
00:05:49.341 ==> default: Waiting for SSH to become available...
00:05:50.715 ==> default: Configuring and enabling network interfaces...
00:05:56.015 default: SSH address: 192.168.121.217:22
00:05:56.015 default: SSH username: vagrant
00:05:56.015 default: SSH auth method: private key
00:05:57.927 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:07.898 ==> default: Mounting SSHFS shared folder...
00:06:07.898 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:07.898 ==> default: Checking Mount..
00:06:09.272 ==> default: Folder Successfully Mounted!
00:06:09.272 ==> default: Running provisioner: file...
00:06:09.838 default: ~/.gitconfig => .gitconfig
00:06:10.405
00:06:10.405 SUCCESS!
00:06:10.405
00:06:10.405 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:06:10.405 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:06:10.405 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:06:10.405
00:06:10.413 [Pipeline] }
00:06:10.428 [Pipeline] // stage
00:06:10.436 [Pipeline] dir
00:06:10.437 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:06:10.439 [Pipeline] {
00:06:10.451 [Pipeline] catchError
00:06:10.453 [Pipeline] {
00:06:10.466 [Pipeline] sh
00:06:10.741 + vagrant ssh-config --host vagrant
00:06:10.742 + sed -ne /^Host/,$p
00:06:10.742 + tee ssh_conf
00:06:14.023 Host vagrant
00:06:14.023 HostName 192.168.121.217
00:06:14.023 User vagrant
00:06:14.023 Port 22
00:06:14.023 UserKnownHostsFile /dev/null
00:06:14.023 StrictHostKeyChecking no
00:06:14.023 PasswordAuthentication no
00:06:14.023 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:14.023 IdentitiesOnly yes
00:06:14.023 LogLevel FATAL
00:06:14.023 ForwardAgent yes
00:06:14.023 ForwardX11 yes
00:06:14.023
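Everything from here on talks to the VM through that saved ssh_conf, which pins the host alias 'vagrant' to 192.168.121.217 with the box's private key and no host-key checking. The pattern, exactly as the following steps use it:

    # Run a command in the VM / copy files in, non-interactively.
    ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
    scp -F ssh_conf -r ./autoruner.sh vagrant@vagrant:./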
00:06:14.034 [Pipeline] withEnv
00:06:14.036 [Pipeline] {
00:06:14.050 [Pipeline] sh
00:06:14.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:14.329 source /etc/os-release
00:06:14.329 [[ -e /image.version ]] && img=$(< /image.version)
00:06:14.329 # Minimal, systemd-like check.
00:06:14.329 if [[ -e /.dockerenv ]]; then
00:06:14.329 # Clear garbage from the node's name:
00:06:14.329 # agt-er_autotest_547-896 -> autotest_547-896
00:06:14.329 # $HOSTNAME is the actual container id
00:06:14.329 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:06:14.329 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:06:14.329 # We can assume this is a mount from a host where container is running,
00:06:14.329 # so fetch its hostname to easily identify the target swarm worker.
00:06:14.329 container="$(< /etc/hostname) ($agent)"
00:06:14.329 else
00:06:14.329 # Fallback
00:06:14.329 container=$agent
00:06:14.329 fi
00:06:14.329 fi
00:06:14.329 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:06:14.329
00:06:14.597 [Pipeline] }
00:06:14.613 [Pipeline] // withEnv
00:06:14.621 [Pipeline] setCustomBuildProperty
00:06:14.635 [Pipeline] stage
00:06:14.637 [Pipeline] { (Tests)
00:06:14.652 [Pipeline] sh
00:06:14.930 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:06:15.237 [Pipeline] sh
00:06:15.537 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:06:15.811 [Pipeline] timeout
00:06:15.811 Timeout set to expire in 50 min
00:06:15.813 [Pipeline] {
00:06:15.827 [Pipeline] sh
00:06:16.106 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:06:16.672 HEAD is now at 1ca833860 nvme/rdma: Add likely/unlikely to IO path
00:06:16.684 [Pipeline] sh
00:06:16.995 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:06:17.266 [Pipeline] sh
00:06:17.544 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:06:17.818 [Pipeline] sh
00:06:18.097 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:06:18.356 ++ readlink -f spdk_repo
00:06:18.356 + DIR_ROOT=/home/vagrant/spdk_repo
00:06:18.356 + [[ -n /home/vagrant/spdk_repo ]]
00:06:18.356 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:06:18.356 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:06:18.356 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:06:18.356 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:06:18.356 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:18.356 + [[ nvme-vg-autotest == pkgdep-* ]]
00:06:18.356 + cd /home/vagrant/spdk_repo
00:06:18.356 + source /etc/os-release
00:06:18.356 ++ NAME='Fedora Linux'
00:06:18.356 ++ VERSION='39 (Cloud Edition)'
00:06:18.356 ++ ID=fedora
00:06:18.356 ++ VERSION_ID=39
00:06:18.356 ++ VERSION_CODENAME=
00:06:18.356 ++ PLATFORM_ID=platform:f39
00:06:18.356 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:18.356 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:18.356 ++ LOGO=fedora-logo-icon
00:06:18.356 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:18.356 ++ HOME_URL=https://fedoraproject.org/
00:06:18.356 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:18.356 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:18.356 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:18.356 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:18.356 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:18.356 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:18.356 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:18.356 ++ SUPPORT_END=2024-11-12
00:06:18.356 ++ VARIANT='Cloud Edition'
00:06:18.356 ++ VARIANT_ID=cloud
00:06:18.356 + uname -a
00:06:18.356 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:18.356 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:18.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:18.911 Hugepages
00:06:18.911 node hugesize free / total
00:06:18.911 node0 1048576kB 0 / 0
00:06:18.911 node0 2048kB 0 / 0
00:06:18.911
00:06:18.911 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:18.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:19.168 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:19.168 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:06:19.168 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:06:19.168 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
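setup.sh status reads those hugepage counters from kernel sysfs; the same 'free / total' numbers for node0 can be checked directly (standard sysfs paths):

    # total vs. free 2 MiB hugepages on NUMA node 0
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages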
00:06:19.168 + rm -f /tmp/spdk-ld-path
00:06:19.168 + source autorun-spdk.conf
00:06:19.168 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:19.168 ++ SPDK_TEST_NVME=1
00:06:19.168 ++ SPDK_TEST_FTL=1
00:06:19.168 ++ SPDK_TEST_ISAL=1
00:06:19.168 ++ SPDK_RUN_ASAN=1
00:06:19.168 ++ SPDK_RUN_UBSAN=1
00:06:19.168 ++ SPDK_TEST_XNVME=1
00:06:19.168 ++ SPDK_TEST_NVME_FDP=1
00:06:19.168 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:19.168 ++ RUN_NIGHTLY=0
00:06:19.168 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:19.168 + [[ -n '' ]]
00:06:19.168 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:19.168 + for M in /var/spdk/build-*-manifest.txt
00:06:19.168 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:19.168 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:19.168 + for M in /var/spdk/build-*-manifest.txt
00:06:19.168 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:19.168 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:19.168 + for M in /var/spdk/build-*-manifest.txt
00:06:19.168 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:19.168 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:19.168 ++ uname
00:06:19.168 + [[ Linux == \L\i\n\u\x ]]
00:06:19.168 + sudo dmesg -T
00:06:19.168 + sudo dmesg --clear
00:06:19.168 + dmesg_pid=5303
+ [[ Fedora Linux == FreeBSD ]]
00:06:19.168 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:19.168 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:19.168 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:19.168 + [[ -x /usr/src/fio-static/fio ]]
00:06:19.168 + sudo dmesg -Tw
00:06:19.168 + export FIO_BIN=/usr/src/fio-static/fio
00:06:19.168 + FIO_BIN=/usr/src/fio-static/fio
00:06:19.168 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:19.168 + [[ ! -v VFIO_QEMU_BIN ]]
00:06:19.168 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:19.168 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:19.168 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:19.168 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:19.168 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:19.168 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:19.168 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:19.427 13:40:06 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:06:19.427 13:40:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:19.427 13:40:06 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:06:19.427 13:40:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:06:19.427 13:40:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:19.427 13:40:06 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:06:19.427 13:40:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:19.427 13:40:06 -- scripts/common.sh@15 -- $ shopt -s extglob
00:06:19.427 13:40:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:06:19.427 13:40:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:19.427 13:40:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:19.427 13:40:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.427 13:40:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.427 13:40:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.427 13:40:06 -- paths/export.sh@5 -- $ export PATH
00:06:19.427 13:40:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:19.427 13:40:06 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:06:19.427 13:40:06 -- common/autobuild_common.sh@486 -- $ date +%s
00:06:19.427 13:40:06 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730727606.XXXXXX
00:06:19.427 13:40:06 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730727606.DjLFaO
00:06:19.427 13:40:06 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:06:19.427 13:40:06 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:06:19.427 13:40:06 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:06:19.427 13:40:06 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:06:19.427 13:40:06 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:06:19.427 13:40:06 -- common/autobuild_common.sh@502 -- $ get_config_params
00:06:19.427 13:40:06 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:06:19.427 13:40:06 -- common/autotest_common.sh@10 -- $ set +x
00:06:19.427 13:40:06 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:06:19.427 13:40:06 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:06:19.427 13:40:06 -- pm/common@17 -- $ local monitor
00:06:19.427 13:40:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:19.427 13:40:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:19.427 13:40:06 -- pm/common@25 -- $ sleep 1
00:06:19.427 13:40:06 -- pm/common@21 -- $ date +%s
00:06:19.427 13:40:06 -- pm/common@21 -- $ date +%s
00:06:19.427 13:40:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730727606
00:06:19.427 13:40:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730727606
00:06:19.427 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730727606_collect-vmstat.pm.log
00:06:19.427 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730727606_collect-cpu-load.pm.log
00:06:20.363 13:40:07 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:06:20.363 13:40:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:06:20.363 13:40:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:06:20.363 13:40:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:06:20.363 13:40:07 -- spdk/autobuild.sh@16 -- $ date -u
00:06:20.363 Mon Nov 4 01:40:07 PM UTC 2024
00:06:20.363 13:40:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:20.363 v25.01-pre-162-g1ca833860
00:06:20.363 13:40:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:06:20.363 13:40:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:06:20.363 13:40:07 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:06:20.363 13:40:07 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:06:20.363 13:40:07 -- common/autotest_common.sh@10 -- $ set +x
00:06:20.363 ************************************
00:06:20.363 START TEST asan
00:06:20.363 ************************************
00:06:20.363 using asan
00:06:20.363 13:40:07 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:06:20.363
00:06:20.363 real 0m0.000s
00:06:20.363 user 0m0.000s
00:06:20.363 sys 0m0.000s
00:06:20.363 13:40:07 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:06:20.363 13:40:07 asan -- common/autotest_common.sh@10 -- $ set +x
00:06:20.363 ************************************
00:06:20.363 END TEST asan
00:06:20.363 ************************************
00:06:20.622 13:40:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:06:20.622 13:40:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:06:20.622 13:40:07 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:06:20.622 13:40:07 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:06:20.622 13:40:07 -- common/autotest_common.sh@10 -- $ set +x
00:06:20.622 ************************************
00:06:20.622 START TEST ubsan
00:06:20.622 ************************************
00:06:20.622 using ubsan
00:06:20.622 13:40:07 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:06:20.622
00:06:20.622 real 0m0.000s
00:06:20.622 user 0m0.000s
00:06:20.622 sys 0m0.000s
00:06:20.622 13:40:07 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:06:20.622 13:40:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:06:20.622 ************************************
00:06:20.622 END TEST ubsan
00:06:20.622 ************************************
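The asan/ubsan "tests" above only assert that the build is sanitizer-enabled; SPDK_RUN_ASAN=1 and SPDK_RUN_UBSAN=1 surface as --enable-asan/--enable-ubsan in the configure call that follows. In compiler terms that roughly amounts to building everything with:

    # GCC/Clang sanitizer flags the configure options translate to
    cc -g -fsanitize=address -fsanitize=undefined -o app app.c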
00:06:20.622 13:40:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:06:20.622 13:40:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:06:20.622 13:40:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:06:20.622 13:40:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:06:20.622 13:40:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:20.622 13:40:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:20.622 13:40:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:06:20.622 13:40:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:06:20.622 13:40:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:06:20.622 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:20.622 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:06:21.187 Using 'verbs' RDMA provider
00:06:34.813 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:49.732 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:49.732 Creating mk/config.mk...done.
00:06:49.732 Creating mk/cc.flags.mk...done.
00:06:49.732 Type 'make' to build.
00:06:49.732 13:40:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:49.732 13:40:36 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:06:49.732 13:40:36 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:06:49.732 13:40:36 -- common/autotest_common.sh@10 -- $ set +x
00:06:49.732 ************************************
00:06:49.732 START TEST make
00:06:49.732 ************************************
00:06:49.732 13:40:36 make -- common/autotest_common.sh@1127 -- $ make -j10
00:06:49.732 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:06:49.732 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:06:49.732 meson setup builddir \
00:06:49.732 -Dwith-libaio=enabled \
00:06:49.732 -Dwith-liburing=enabled \
00:06:49.732 -Dwith-libvfn=disabled \
00:06:49.732 -Dwith-spdk=disabled \
00:06:49.732 -Dexamples=false \
00:06:49.732 -Dtests=false \
00:06:49.732 -Dtools=false && \
00:06:49.732 meson compile -C builddir && \
00:06:49.732 cd -)
00:06:49.732 make[1]: Nothing to be done for 'all'.
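The parenthesized block above is the whole xnvme step: --with-xnvme at configure time is why 'make' first drives meson in spdk/xnvme, with the libaio and liburing backends enabled and libvfn, the SPDK backend, examples, tests, and tools all off. To verify what a finished builddir actually resolved, meson can dump the options back out (a sketch; introspect is a stock meson subcommand):

    meson introspect /home/vagrant/spdk_repo/spdk/xnvme/builddir --buildoptions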
00:06:53.934 The Meson build system
00:06:53.934 Version: 1.5.0
00:06:53.934 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:06:53.934 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:53.934 Build type: native build
00:06:53.934 Project name: xnvme
00:06:53.934 Project version: 0.7.5
00:06:53.934 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:53.934 C linker for the host machine: cc ld.bfd 2.40-14
00:06:53.934 Host machine cpu family: x86_64
00:06:53.934 Host machine cpu: x86_64
00:06:53.934 Message: host_machine.system: linux
00:06:53.934 Compiler for C supports arguments -Wno-missing-braces: YES
00:06:53.934 Compiler for C supports arguments -Wno-cast-function-type: YES
00:06:53.934 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:06:53.934 Run-time dependency threads found: YES
00:06:53.934 Has header "setupapi.h" : NO
00:06:53.934 Has header "linux/blkzoned.h" : YES
00:06:53.934 Has header "linux/blkzoned.h" : YES (cached)
00:06:53.934 Has header "libaio.h" : YES
00:06:53.934 Library aio found: YES
00:06:53.934 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:53.934 Run-time dependency liburing found: YES 2.2
00:06:53.934 Dependency libvfn skipped: feature with-libvfn disabled
00:06:53.934 Found CMake: /usr/bin/cmake (3.27.7)
00:06:53.934 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:06:53.934 Subproject spdk : skipped: feature with-spdk disabled
00:06:53.934 Run-time dependency appleframeworks found: NO (tried framework)
00:06:53.934 Run-time dependency appleframeworks found: NO (tried framework)
00:06:53.934 Library rt found: YES
00:06:53.934 Checking for function "clock_gettime" with dependency -lrt: YES
00:06:53.934 Configuring xnvme_config.h using configuration
00:06:53.934 Configuring xnvme.spec using configuration
00:06:53.934 Run-time dependency bash-completion found: YES 2.11
00:06:53.934 Message: Bash-completions: /usr/share/bash-completion/completions
00:06:53.934 Program cp found: YES (/usr/bin/cp)
00:06:53.934 Build targets in project: 3
00:06:53.934
00:06:53.934 xnvme 0.7.5
00:06:53.934
00:06:53.934 Subprojects
00:06:53.934 spdk : NO Feature 'with-spdk' disabled
00:06:53.934
00:06:53.934 User defined options
00:06:53.934 examples : false
00:06:53.934 tests : false
00:06:53.934 tools : false
00:06:53.934 with-libaio : enabled
00:06:53.934 with-liburing: enabled
00:06:53.934 with-libvfn : disabled
00:06:53.934 with-spdk : disabled
00:06:53.934
00:06:53.934 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:53.934 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:06:53.934 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:06:53.934 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:06:53.934 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:06:53.934 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:06:53.934 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:06:53.934 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:06:53.934 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:06:53.934 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:06:53.934 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:06:53.935 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:06:53.935 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:06:54.203 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:06:54.203 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:06:54.203 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:06:54.203 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:06:54.203 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:06:54.203 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:06:54.203 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:06:54.203 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:06:54.203 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:06:54.203 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:06:54.203 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:06:54.203 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:06:54.203 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:06:54.203 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:06:54.203 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:06:54.463 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:06:54.463 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:06:54.463 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:06:54.463 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:06:54.463 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:06:54.463 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:06:54.463 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:06:54.463 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:06:54.463 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:06:54.463 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:06:54.463 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:06:54.463 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:06:54.463 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:06:54.463 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:06:54.463 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:06:54.463 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:06:54.463 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:06:54.463 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:06:54.463 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:06:54.463 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:06:54.463 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:06:54.463 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:06:54.463 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:06:54.463 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:06:54.463 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:06:54.721 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:06:54.721 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:06:54.721 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:06:54.721 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:06:54.721 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:06:54.721 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:06:54.721 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:06:54.721 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:06:54.721 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:06:54.721 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:06:55.020 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:06:55.020 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:06:55.020 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:06:55.020 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:06:55.020 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:06:55.020 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:06:55.020 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:06:55.020 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:06:55.020 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:06:55.020 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:06:55.277 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:06:55.277 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:06:55.841 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:06:55.841 [75/76] Linking static target lib/libxnvme.a
00:06:55.841 [76/76] Linking target lib/libxnvme.so.0.7.5
00:06:55.841 INFO: autodetecting backend as ninja
00:06:55.841 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:56.098 /home/vagrant/spdk_repo/spdk/xnvmebuild
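The log now moves on to DPDK 24.03.0, which the configure step earlier resolved to SPDK's bundled copy ('Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build'). An external DPDK build can be substituted at configure time instead (a sketch; --with-dpdk is an existing SPDK configure option):

    ./configure --with-dpdk=/path/to/dpdk/build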
00:07:06.059 The Meson build system
00:07:06.059 Version: 1.5.0
00:07:06.059 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:07:06.059 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:07:06.059 Build type: native build
00:07:06.059 Program cat found: YES (/usr/bin/cat)
00:07:06.059 Project name: DPDK
00:07:06.059 Project version: 24.03.0
00:07:06.059 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:06.059 C linker for the host machine: cc ld.bfd 2.40-14
00:07:06.059 Host machine cpu family: x86_64
00:07:06.059 Host machine cpu: x86_64
00:07:06.059 Message: ## Building in Developer Mode ##
00:07:06.059 Program pkg-config found: YES (/usr/bin/pkg-config)
00:07:06.059 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:07:06.059 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:07:06.059 Program python3 found: YES (/usr/bin/python3)
00:07:06.059 Program cat found: YES (/usr/bin/cat)
00:07:06.059 Compiler for C supports arguments -march=native: YES
00:07:06.060 Checking for size of "void *" : 8
00:07:06.060 Checking for size of "void *" : 8 (cached)
00:07:06.060 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:07:06.059 Library m found: YES
00:07:06.059 Library numa found: YES
00:07:06.059 Has header "numaif.h" : YES
00:07:06.059 Library fdt found: NO
00:07:06.059 Library execinfo found: NO
00:07:06.059 Has header "execinfo.h" : YES
00:07:06.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:06.059 Run-time dependency libarchive found: NO (tried pkgconfig)
00:07:06.059 Run-time dependency libbsd found: NO (tried pkgconfig)
00:07:06.059 Run-time dependency jansson found: NO (tried pkgconfig)
00:07:06.059 Run-time dependency openssl found: YES 3.1.1
00:07:06.059 Run-time dependency libpcap found: YES 1.10.4
00:07:06.059 Has header "pcap.h" with dependency libpcap: YES
00:07:06.059 Compiler for C supports arguments -Wcast-qual: YES
00:07:06.059 Compiler for C supports arguments -Wdeprecated: YES
00:07:06.059 Compiler for C supports arguments -Wformat: YES
00:07:06.059 Compiler for C supports arguments -Wformat-nonliteral: NO
00:07:06.059 Compiler for C supports arguments -Wformat-security: NO
00:07:06.059 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:06.059 Compiler for C supports arguments -Wmissing-prototypes: YES
00:07:06.059 Compiler for C supports arguments -Wnested-externs: YES
00:07:06.059 Compiler for C supports arguments -Wold-style-definition: YES
00:07:06.059 Compiler for C supports arguments -Wpointer-arith: YES
00:07:06.059 Compiler for C supports arguments -Wsign-compare: YES
00:07:06.059 Compiler for C supports arguments -Wstrict-prototypes: YES
00:07:06.059 Compiler for C supports arguments -Wundef: YES
00:07:06.059 Compiler for C supports arguments -Wwrite-strings: YES
00:07:06.059 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:07:06.059 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:07:06.059 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:06.059 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:07:06.059 Program objdump found: YES (/usr/bin/objdump)
00:07:06.059 Compiler for C supports arguments -mavx512f: YES
00:07:06.059 Checking if "AVX512 checking" compiles: YES
00:07:06.059 Fetching value of define "__SSE4_2__" : 1
00:07:06.059 Fetching value of define "__AES__" : 1
00:07:06.059 Fetching value of define "__AVX__" : 1
00:07:06.059 Fetching value of define "__AVX2__" : 1
00:07:06.059 Fetching value of define "__AVX512BW__" : 1
00:07:06.059 Fetching value of define "__AVX512CD__" : 1
00:07:06.059 Fetching value of define "__AVX512DQ__" : 1
00:07:06.059 Fetching value of define "__AVX512F__" : 1
00:07:06.059 Fetching value of define "__AVX512VL__" : 1
00:07:06.060 Fetching value of define "__PCLMUL__" : 1
00:07:06.060 Fetching value of define "__RDRND__" : 1
00:07:06.060 Fetching value of define "__RDSEED__" : 1
00:07:06.060 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:07:06.060 Fetching value of define "__znver1__" : (undefined)
00:07:06.060 Fetching value of define "__znver2__" : (undefined)
00:07:06.060 Fetching value of define "__znver3__" : (undefined)
00:07:06.060 Fetching value of define "__znver4__" : (undefined)
00:07:06.060 Library asan found: YES
00:07:06.060 Compiler for C supports arguments -Wno-format-truncation: YES
00:07:06.060 Message: lib/log: Defining dependency "log"
00:07:06.060 Message: lib/kvargs: Defining dependency "kvargs"
00:07:06.060 Message: lib/telemetry: Defining dependency "telemetry"
00:07:06.060 Library rt found: YES
00:07:06.060 Checking for function "getentropy" : NO
00:07:06.060 Message: lib/eal: Defining dependency "eal"
00:07:06.060 Message: lib/ring: Defining dependency "ring"
00:07:06.060 Message: lib/rcu: Defining dependency "rcu"
00:07:06.060 Message: lib/mempool: Defining dependency "mempool"
00:07:06.060 Message: lib/mbuf: Defining dependency "mbuf"
00:07:06.060 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:06.060 Fetching value of define "__AVX512F__" : 1 (cached)
00:07:06.060 Fetching value of define "__AVX512BW__" : 1 (cached)
00:07:06.060 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:07:06.060 Fetching value of define "__AVX512VL__" : 1 (cached)
00:07:06.060 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:07:06.060 Compiler for C supports arguments -mpclmul: YES
00:07:06.060 Compiler for C supports arguments -maes: YES
00:07:06.060 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:06.060 Compiler for C supports arguments -mavx512bw: YES
00:07:06.060 Compiler for C supports arguments -mavx512dq: YES
00:07:06.060 Compiler for C supports arguments -mavx512vl: YES
00:07:06.060 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:06.060 Compiler for C supports arguments -mavx2: YES
00:07:06.060 Compiler for C supports arguments -mavx: YES
00:07:06.060 Message: lib/net: Defining dependency "net"
00:07:06.060 Message: lib/meter: Defining dependency "meter"
00:07:06.060 Message: lib/ethdev: Defining dependency "ethdev"
00:07:06.060 Message: lib/pci: Defining dependency "pci"
00:07:06.060 Message: lib/cmdline: Defining dependency "cmdline"
00:07:06.060 Message: lib/hash: Defining dependency "hash"
00:07:06.060 Message: lib/timer: Defining dependency "timer"
00:07:06.060 Message: lib/compressdev: Defining dependency "compressdev"
00:07:06.060 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:06.060 Message: lib/dmadev: Defining dependency "dmadev"
00:07:06.060 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:06.060 Message: lib/power: Defining dependency "power"
00:07:06.060 Message: lib/reorder: Defining dependency "reorder"
00:07:06.060 Message: lib/security: Defining dependency "security"
00:07:06.060 Has header "linux/userfaultfd.h" : YES
00:07:06.060 Has header "linux/vduse.h" : YES
00:07:06.060 Message: lib/vhost: Defining dependency "vhost"
00:07:06.060 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:06.060 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:06.060 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:06.060 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:06.060 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:06.060 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:06.060 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:06.060 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:06.060 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:06.060 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:06.060 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:06.060 Configuring doxy-api-html.conf using configuration
00:07:06.060 Configuring doxy-api-man.conf using configuration
00:07:06.060 Program mandb found: YES (/usr/bin/mandb)
00:07:06.060 Program sphinx-build found: NO
00:07:06.060 Configuring rte_build_config.h using configuration
00:07:06.060 Message:
00:07:06.060 =================
00:07:06.060 Applications Enabled
00:07:06.060 =================
00:07:06.060
00:07:06.060 apps:
00:07:06.060
00:07:06.060
00:07:06.060 Message:
00:07:06.060 =================
00:07:06.060 Libraries Enabled
00:07:06.060 =================
00:07:06.060
00:07:06.060 libs:
00:07:06.060 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:06.060 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:07:06.060 cryptodev, dmadev, power, reorder, security, vhost,
00:07:06.060
00:07:06.060 Message:
00:07:06.060 ===============
00:07:06.060 Drivers Enabled
00:07:06.060 ===============
00:07:06.060
00:07:06.060 common:
00:07:06.060
00:07:06.060 bus:
00:07:06.060 pci, vdev,
00:07:06.060 mempool:
00:07:06.060 ring,
00:07:06.060 dma:
00:07:06.060
00:07:06.060 net:
00:07:06.060
00:07:06.060 crypto:
00:07:06.060
00:07:06.060 compress:
00:07:06.060
00:07:06.060 vdpa:
00:07:06.060
00:07:06.060
00:07:06.060 Message:
00:07:06.060 =================
00:07:06.060 Content Skipped
00:07:06.060 =================
00:07:06.060
00:07:06.060 apps:
00:07:06.060 dumpcap: explicitly disabled via build config
00:07:06.060 graph: explicitly disabled via build config
00:07:06.060 pdump: explicitly disabled via build config
00:07:06.060 proc-info: explicitly disabled via build config
00:07:06.060 test-acl: explicitly disabled via build config
00:07:06.060 test-bbdev: explicitly disabled via build config
00:07:06.060 test-cmdline: explicitly disabled via build config
00:07:06.060 test-compress-perf: explicitly disabled via build config
00:07:06.060 test-crypto-perf: explicitly disabled via build config
00:07:06.060 test-dma-perf: explicitly disabled via build config
00:07:06.060 test-eventdev: explicitly disabled via build config
00:07:06.060 test-fib: explicitly disabled via build config
00:07:06.060 test-flow-perf: explicitly disabled via build config
00:07:06.060 test-gpudev: explicitly disabled via build config
00:07:06.060 test-mldev: explicitly disabled via build config
00:07:06.060 test-pipeline: explicitly disabled via build config
00:07:06.060 test-pmd: explicitly disabled via build config
00:07:06.060 test-regex: explicitly disabled via build config
00:07:06.060 test-sad: explicitly disabled via build config
00:07:06.060 test-security-perf: explicitly disabled via build config
00:07:06.060
00:07:06.060 libs:
00:07:06.060 argparse: explicitly disabled via build config
00:07:06.060 metrics: explicitly disabled via build config
00:07:06.060 acl: explicitly disabled via build config
00:07:06.060 bbdev: explicitly disabled via build config
00:07:06.060 bitratestats: explicitly disabled via build config
00:07:06.060 bpf: explicitly disabled via build config
00:07:06.060 cfgfile: explicitly disabled via build config
00:07:06.060 distributor: explicitly disabled via build config
00:07:06.060 efd: explicitly disabled via build config
00:07:06.060 eventdev: explicitly disabled via build config
00:07:06.060 dispatcher: explicitly disabled via build config
00:07:06.060 gpudev: explicitly disabled via build config
00:07:06.060 gro: explicitly disabled via build config
00:07:06.060 gso: explicitly disabled via build config
00:07:06.060 ip_frag: explicitly disabled via build config
00:07:06.060 jobstats: explicitly disabled via build config
00:07:06.060 latencystats: explicitly disabled via build config
00:07:06.060 lpm: explicitly disabled via build config
00:07:06.060 member: explicitly disabled via build config
00:07:06.060 pcapng: explicitly disabled via build config
00:07:06.060 rawdev: explicitly disabled via build config
00:07:06.060 regexdev: explicitly disabled via build config
00:07:06.060 mldev: explicitly disabled via build config
00:07:06.060 rib: explicitly disabled via build config
00:07:06.060 sched: explicitly disabled via build config
00:07:06.060 stack: explicitly disabled via build config
00:07:06.060 ipsec: explicitly disabled via build config
00:07:06.060 pdcp: explicitly disabled via build config
00:07:06.060 fib: explicitly disabled via build config
00:07:06.060 port: explicitly disabled via build config
00:07:06.060 pdump: explicitly disabled via build config
00:07:06.060 table: explicitly disabled via build config
00:07:06.060 pipeline: explicitly disabled via build config
00:07:06.060 graph: explicitly disabled via build config
00:07:06.060 node: explicitly disabled via build config
00:07:06.060
00:07:06.060 drivers:
00:07:06.060 common/cpt: not in enabled drivers build config
00:07:06.060 common/dpaax: not in enabled drivers build config
00:07:06.060 common/iavf: not in enabled drivers build config
00:07:06.060 common/idpf: not in enabled drivers build config
00:07:06.060 common/ionic: not in enabled drivers build config
00:07:06.060 common/mvep: not in enabled drivers build config
00:07:06.060 common/octeontx: not in enabled drivers build config
00:07:06.060 bus/auxiliary: not in enabled drivers build config
00:07:06.060 bus/cdx: not in enabled drivers build config
00:07:06.060 bus/dpaa: not in enabled drivers build config
00:07:06.060 bus/fslmc: not in enabled drivers build config
00:07:06.060 bus/ifpga: not in enabled drivers build config
00:07:06.060 bus/platform: not in enabled drivers build config
00:07:06.060 bus/uacce: not in enabled drivers build config
00:07:06.060 bus/vmbus: not in enabled drivers build config
00:07:06.060 common/cnxk: not in enabled drivers build config
00:07:06.060 common/mlx5: not in enabled drivers build config
00:07:06.060 common/nfp: not in enabled drivers build config
00:07:06.060 common/nitrox: not in enabled drivers build config
00:07:06.060 common/qat: not in enabled drivers build config
00:07:06.060 common/sfc_efx: not in enabled drivers build config
00:07:06.060 mempool/bucket: not in enabled drivers build config
00:07:06.060 mempool/cnxk: not in enabled drivers build config
00:07:06.060 mempool/dpaa: not in enabled drivers build config
00:07:06.060 mempool/dpaa2: not in enabled drivers build config
00:07:06.060 mempool/octeontx: not in enabled drivers build config
00:07:06.060 mempool/stack: not in enabled drivers build config
00:07:06.060 dma/cnxk: not in enabled drivers build config
00:07:06.060 dma/dpaa: not in enabled drivers build config
00:07:06.060 dma/dpaa2: not in enabled drivers build config
00:07:06.060 dma/hisilicon: not in enabled drivers build config
00:07:06.061 dma/idxd: not in enabled drivers build config
00:07:06.061 dma/ioat: not in enabled drivers build config
00:07:06.061 dma/skeleton: not in enabled drivers build config
00:07:06.061 net/af_packet: not in enabled drivers build config
00:07:06.061 net/af_xdp: not in enabled drivers build config
00:07:06.061 net/ark: not in enabled drivers build config
00:07:06.061 net/atlantic: not in enabled drivers build config
00:07:06.061 net/avp: not in enabled drivers build config
00:07:06.061 net/axgbe: not in enabled drivers build config
00:07:06.061 net/bnx2x: not in enabled drivers build config
00:07:06.061 net/bnxt: not in enabled drivers build config
00:07:06.061 net/bonding: not in enabled drivers build config
00:07:06.061 net/cnxk: not in enabled drivers build config
00:07:06.061 net/cpfl: not in enabled drivers build config
00:07:06.061 net/cxgbe: not in enabled drivers build config
00:07:06.061 net/dpaa: not in enabled drivers build config
00:07:06.061 net/dpaa2: not in enabled drivers build config
00:07:06.061 net/e1000: not in enabled drivers build config
00:07:06.061 net/ena: not in enabled drivers build config
00:07:06.061 net/enetc: not in enabled drivers build config
00:07:06.061 net/enetfec: not in enabled drivers build config
00:07:06.061 net/enic: not in enabled drivers build config
00:07:06.061 net/failsafe: not in enabled drivers build config
00:07:06.061 net/fm10k: not in enabled drivers build config
00:07:06.061 net/gve: not in enabled drivers build config
00:07:06.061 net/hinic: not in enabled drivers build config
00:07:06.061 net/hns3: not in enabled drivers build config
00:07:06.061 net/i40e: not in enabled drivers build config
00:07:06.061 net/iavf: not in enabled drivers build config
00:07:06.061 net/ice: not in enabled drivers build config
00:07:06.061 net/idpf: not in enabled drivers build config
00:07:06.061 net/igc: not in enabled drivers build config
00:07:06.061 net/ionic: not in enabled drivers build config
00:07:06.061 net/ipn3ke: not in enabled drivers build config
00:07:06.061 net/ixgbe: not in enabled drivers build config
00:07:06.061 net/mana: not in enabled drivers build config
00:07:06.061 net/memif: not in enabled drivers build config
00:07:06.061 net/mlx4: not in enabled drivers build config
00:07:06.061 net/mlx5: not in enabled drivers build config
00:07:06.061 net/mvneta: not in enabled drivers build config
00:07:06.061 net/mvpp2: not in enabled drivers build config
00:07:06.061 net/netvsc: not in enabled drivers build config
00:07:06.061 net/nfb: not in enabled drivers build config
00:07:06.061 net/nfp: not in enabled drivers build config
00:07:06.061 net/ngbe: not in enabled drivers build config
00:07:06.061 net/null: not in enabled drivers build config
00:07:06.061 net/octeontx: not in enabled drivers build config
00:07:06.061 net/octeon_ep: not in enabled drivers build config
00:07:06.061 net/pcap: not in enabled drivers build config
00:07:06.061 net/pfe: not in enabled drivers build config
00:07:06.061 net/qede: not in enabled drivers build config
00:07:06.061 net/ring: not in enabled drivers build config
00:07:06.061 net/sfc: not in enabled drivers build config
00:07:06.061 net/softnic: not in enabled drivers build config
00:07:06.061 net/tap: not in enabled drivers build config
00:07:06.061 net/thunderx: not in enabled drivers build config
00:07:06.061 net/txgbe: not in enabled drivers build config
00:07:06.061 net/vdev_netvsc: not in enabled drivers build config
00:07:06.061 net/vhost: not in enabled drivers build config
00:07:06.061 net/virtio: not in enabled drivers build config
00:07:06.061 net/vmxnet3: not in enabled drivers build config
00:07:06.061 raw/*: missing internal dependency, "rawdev"
00:07:06.061 crypto/armv8: not in enabled drivers build config
00:07:06.061 crypto/bcmfs: not in enabled drivers build config
00:07:06.061 crypto/caam_jr: not in enabled drivers build config
00:07:06.061 crypto/ccp: not in enabled drivers build config
00:07:06.061 crypto/cnxk: not in enabled drivers build config
00:07:06.061 crypto/dpaa_sec: not in enabled drivers build config
00:07:06.061 crypto/dpaa2_sec: not in enabled drivers build config
00:07:06.061 crypto/ipsec_mb: not in enabled drivers build config
00:07:06.061 crypto/mlx5: not in enabled drivers build config
00:07:06.061 crypto/mvsam: not in enabled drivers build config
00:07:06.061 crypto/nitrox: not in enabled drivers build config 00:07:06.061 crypto/null: not in enabled drivers build config 00:07:06.061 crypto/octeontx: not in enabled drivers build config 00:07:06.061 crypto/openssl: not in enabled drivers build config 00:07:06.061 crypto/scheduler: not in enabled drivers build config 00:07:06.061 crypto/uadk: not in enabled drivers build config 00:07:06.061 crypto/virtio: not in enabled drivers build config 00:07:06.061 compress/isal: not in enabled drivers build config 00:07:06.061 compress/mlx5: not in enabled drivers build config 00:07:06.061 compress/nitrox: not in enabled drivers build config 00:07:06.061 compress/octeontx: not in enabled drivers build config 00:07:06.061 compress/zlib: not in enabled drivers build config 00:07:06.061 regex/*: missing internal dependency, "regexdev" 00:07:06.061 ml/*: missing internal dependency, "mldev" 00:07:06.061 vdpa/ifc: not in enabled drivers build config 00:07:06.061 vdpa/mlx5: not in enabled drivers build config 00:07:06.061 vdpa/nfp: not in enabled drivers build config 00:07:06.061 vdpa/sfc: not in enabled drivers build config 00:07:06.061 event/*: missing internal dependency, "eventdev" 00:07:06.061 baseband/*: missing internal dependency, "bbdev" 00:07:06.061 gpu/*: missing internal dependency, "gpudev" 00:07:06.061 00:07:06.061 00:07:06.629 Build targets in project: 85 00:07:06.629 00:07:06.629 DPDK 24.03.0 00:07:06.629 00:07:06.629 User defined options 00:07:06.629 buildtype : debug 00:07:06.629 default_library : shared 00:07:06.629 libdir : lib 00:07:06.629 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:06.629 b_sanitize : address 00:07:06.629 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:06.629 c_link_args : 00:07:06.629 cpu_instruction_set: native 00:07:06.629 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:06.630 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:06.630 enable_docs : false 00:07:06.630 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:06.630 enable_kmods : false 00:07:06.630 max_lcores : 128 00:07:06.630 tests : false 00:07:06.630 00:07:06.630 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:07.562 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:07.562 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:07.562 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:07.562 [3/268] Linking static target lib/librte_kvargs.a 00:07:07.562 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:07.562 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:07.562 [6/268] Linking static target lib/librte_log.a 00:07:08.127 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:08.127 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:08.127 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:08.127 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 
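The configuration summary above is meson's report for the DPDK 24.03.0 tree bundled with SPDK. A minimal sketch of an equivalent configure-and-build step follows, using only option values taken verbatim from the "User defined options" block; the CI actually drives this through SPDK's own configure/Makefile wrappers, and the disable_apps/disable_libs lists are truncated here (the full lists appear in the summary above):

    # Sketch only -- option values copied from the build summary; run from the
    # dpdk source root. The wrapper scripts the CI really uses are not shown.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
        -Denable_docs=false -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump \
        -Ddisable_libs=acl,argparse,bbdev
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10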
00:07:08.127 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:08.385 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.385 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:08.386 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:08.386 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:08.386 [16/268] Linking static target lib/librte_telemetry.a 00:07:08.386 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:08.386 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:08.951 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.209 [20/268] Linking target lib/librte_log.so.24.1 00:07:09.209 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:09.466 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:09.466 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:09.466 [24/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:09.466 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:09.467 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:09.467 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.725 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:09.725 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:09.725 [30/268] Linking target lib/librte_kvargs.so.24.1 00:07:09.725 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:09.725 [32/268] Linking target lib/librte_telemetry.so.24.1 00:07:09.983 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:09.983 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:09.983 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:10.241 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:10.241 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:10.241 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:10.498 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:10.498 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:10.498 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:10.498 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:10.498 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:10.758 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:10.758 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:11.323 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:11.323 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:11.323 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:11.580 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:11.580 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:11.580 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:11.838 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:11.838 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:11.838 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:11.838 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:12.095 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:12.352 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:12.352 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:12.352 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:12.611 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:12.611 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:12.611 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:12.611 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:12.611 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:12.611 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:12.873 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:13.443 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:13.700 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:13.700 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:13.958 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:13.958 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:13.958 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:14.215 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:14.215 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:14.215 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:14.215 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:14.215 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:14.215 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:14.473 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:14.757 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:14.757 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:15.015 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:15.015 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:15.274 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:15.274 [85/268] Linking static target lib/librte_eal.a 00:07:15.274 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:15.531 [87/268] Linking static target lib/librte_ring.a 00:07:15.531 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:15.789 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:15.789 [90/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:16.047 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:16.047 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:16.047 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:16.047 [94/268] Linking static target lib/librte_mempool.a 00:07:16.047 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.304 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:16.304 [97/268] Linking static target lib/librte_rcu.a 00:07:16.561 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:16.818 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:16.818 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:16.818 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:16.818 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:16.818 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:17.075 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:17.334 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.334 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:17.334 [107/268] Linking static target lib/librte_net.a 00:07:17.591 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:17.591 [109/268] Linking static target lib/librte_meter.a 00:07:17.849 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:17.849 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:17.850 [112/268] Linking static target lib/librte_mbuf.a 00:07:17.850 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.108 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:18.108 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:18.366 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.366 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.366 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:18.623 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:19.187 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:19.187 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.187 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:19.754 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:19.754 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:19.754 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:20.013 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:20.013 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:20.013 [128/268] Linking static target lib/librte_pci.a 00:07:20.013 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:20.270 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:20.270 [131/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:20.270 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:20.528 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:20.528 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:20.528 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:20.528 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.528 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:20.785 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:20.785 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:20.785 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:20.785 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:20.785 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:21.043 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:21.043 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:21.043 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:21.043 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:21.043 [147/268] Linking static target lib/librte_cmdline.a 00:07:21.609 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:21.867 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:21.867 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:21.867 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:21.867 [152/268] Linking static target lib/librte_timer.a 00:07:21.867 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:22.124 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:22.382 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:22.382 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:22.952 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.952 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:22.952 [159/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:23.210 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:23.210 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:23.210 [162/268] Linking static target lib/librte_compressdev.a 00:07:23.468 [163/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:23.468 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:23.468 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:23.468 [166/268] Linking static target lib/librte_dmadev.a 00:07:23.468 [167/268] Linking static target lib/librte_ethdev.a 00:07:23.725 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:23.983 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:23.983 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 
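Records such as "Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)" are DPDK's per-library symbol checks: after each library is linked, a custom target verifies that the symbols it exports match its version.map. A rough sketch of the idea, assuming an nm-based comparison; the real check is a DPDK buildtools script run by meson, and the paths below are illustrative only:

    # Sketch: compare exported symbols against the version map for one library.
    # The nm/diff approach and file paths are assumptions, not the CI's tooling.
    nm --dynamic --defined-only build-tmp/lib/librte_kvargs.so.24.1 \
        | awk '{ print $3 }' | sort -u > exported.txt
    grep -oE '\brte_[A-Za-z0-9_]+' lib/kvargs/version.map | sort -u > declared.txt
    diff -u declared.txt exported.txt || echo "symbol mismatch"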
00:07:23.983 [171/268] Linking static target lib/librte_hash.a 00:07:23.983 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:23.983 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:24.241 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:24.809 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:24.809 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.809 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:24.809 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:24.809 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.126 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:25.126 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:25.695 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:25.695 [183/268] Linking static target lib/librte_cryptodev.a 00:07:25.695 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.953 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:25.953 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:25.953 [187/268] Linking static target lib/librte_power.a 00:07:26.211 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:26.211 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:26.211 [190/268] Linking static target lib/librte_reorder.a 00:07:26.469 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:26.727 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:26.727 [193/268] Linking static target lib/librte_security.a 00:07:27.292 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.922 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.922 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:27.922 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.180 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:28.180 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:28.437 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:28.437 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:29.003 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:29.003 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:29.003 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:29.261 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:29.261 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:29.519 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.519 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:29.519 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:29.776 [210/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:29.776 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:30.033 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:30.291 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:30.291 [214/268] Linking static target drivers/librte_bus_vdev.a 00:07:30.291 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:30.291 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:30.291 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:30.291 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:30.291 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:30.291 [220/268] Linking static target drivers/librte_bus_pci.a 00:07:30.291 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:30.549 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:30.807 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:30.807 [224/268] Linking static target drivers/librte_mempool_ring.a 00:07:30.807 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:30.807 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:31.371 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.794 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.794 [229/268] Linking target lib/librte_eal.so.24.1 00:07:33.052 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:33.310 [231/268] Linking target lib/librte_meter.so.24.1 00:07:33.310 [232/268] Linking target lib/librte_ring.so.24.1 00:07:33.310 [233/268] Linking target lib/librte_pci.so.24.1 00:07:33.310 [234/268] Linking target lib/librte_timer.so.24.1 00:07:33.310 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:33.310 [236/268] Linking target lib/librte_dmadev.so.24.1 00:07:33.569 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:33.569 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:33.569 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:33.569 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:33.569 [241/268] Linking target lib/librte_rcu.so.24.1 00:07:33.827 [242/268] Linking target lib/librte_mempool.so.24.1 00:07:33.827 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:33.827 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:33.827 [245/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:33.827 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:33.827 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:34.085 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:34.085 [249/268] Linking target lib/librte_mbuf.so.24.1 00:07:34.344 [250/268] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:34.344 [251/268] Linking target lib/librte_reorder.so.24.1 00:07:34.344 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:07:34.344 [253/268] Linking target lib/librte_net.so.24.1 00:07:34.344 [254/268] Linking target lib/librte_compressdev.so.24.1 00:07:34.601 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:34.601 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:34.860 [257/268] Linking target lib/librte_security.so.24.1 00:07:34.860 [258/268] Linking target lib/librte_cmdline.so.24.1 00:07:34.860 [259/268] Linking target lib/librte_hash.so.24.1 00:07:35.118 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:35.378 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.378 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:35.640 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:35.898 [264/268] Linking target lib/librte_power.so.24.1 00:07:44.005 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:44.005 [266/268] Linking static target lib/librte_vhost.a 00:07:45.905 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.163 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:46.163 INFO: autodetecting backend as ninja 00:07:46.163 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:24.968 CC lib/log/log.o 00:08:24.968 CC lib/log/log_flags.o 00:08:24.968 CC lib/log/log_deprecated.o 00:08:24.968 CC lib/ut_mock/mock.o 00:08:24.968 CC lib/ut/ut.o 00:08:24.968 LIB libspdk_ut.a 00:08:24.968 LIB libspdk_log.a 00:08:24.968 LIB libspdk_ut_mock.a 00:08:24.968 SO libspdk_ut.so.2.0 00:08:24.968 SO libspdk_log.so.7.1 00:08:24.968 SO libspdk_ut_mock.so.6.0 00:08:24.968 SYMLINK libspdk_ut.so 00:08:24.968 SYMLINK libspdk_log.so 00:08:24.968 SYMLINK libspdk_ut_mock.so 00:08:24.968 CC lib/util/base64.o 00:08:24.968 CC lib/ioat/ioat.o 00:08:24.968 CC lib/util/bit_array.o 00:08:24.968 CXX lib/trace_parser/trace.o 00:08:24.968 CC lib/util/cpuset.o 00:08:24.968 CC lib/util/crc16.o 00:08:24.968 CC lib/util/crc32.o 00:08:24.968 CC lib/dma/dma.o 00:08:24.968 CC lib/util/crc32c.o 00:08:24.968 CC lib/util/crc32_ieee.o 00:08:24.968 CC lib/vfio_user/host/vfio_user_pci.o 00:08:24.968 CC lib/util/crc64.o 00:08:24.968 CC lib/util/dif.o 00:08:24.968 CC lib/util/fd.o 00:08:24.968 LIB libspdk_dma.a 00:08:24.968 SO libspdk_dma.so.5.0 00:08:24.968 CC lib/vfio_user/host/vfio_user.o 00:08:24.968 CC lib/util/fd_group.o 00:08:24.968 CC lib/util/file.o 00:08:24.968 SYMLINK libspdk_dma.so 00:08:24.968 CC lib/util/hexlify.o 00:08:24.968 CC lib/util/iov.o 00:08:24.968 CC lib/util/math.o 00:08:24.968 LIB libspdk_ioat.a 00:08:24.968 SO libspdk_ioat.so.7.0 00:08:24.968 SYMLINK libspdk_ioat.so 00:08:24.968 CC lib/util/net.o 00:08:24.968 CC lib/util/pipe.o 00:08:24.968 CC lib/util/strerror_tls.o 00:08:24.968 LIB libspdk_vfio_user.a 00:08:24.968 CC lib/util/string.o 00:08:24.968 CC lib/util/uuid.o 00:08:24.968 SO libspdk_vfio_user.so.5.0 00:08:24.968 CC lib/util/xor.o 00:08:24.968 CC lib/util/zipf.o 00:08:24.968 SYMLINK libspdk_vfio_user.so 00:08:24.968 CC lib/util/md5.o 00:08:24.968 LIB libspdk_util.a 00:08:24.968 SO libspdk_util.so.10.1 00:08:24.968 SYMLINK 
libspdk_util.so 00:08:24.968 LIB libspdk_trace_parser.a 00:08:24.968 SO libspdk_trace_parser.so.6.0 00:08:24.968 SYMLINK libspdk_trace_parser.so 00:08:24.968 CC lib/vmd/vmd.o 00:08:24.968 CC lib/vmd/led.o 00:08:24.968 CC lib/conf/conf.o 00:08:24.968 CC lib/idxd/idxd.o 00:08:24.968 CC lib/idxd/idxd_user.o 00:08:24.968 CC lib/idxd/idxd_kernel.o 00:08:24.968 CC lib/rdma_utils/rdma_utils.o 00:08:24.968 CC lib/json/json_parse.o 00:08:24.968 CC lib/json/json_util.o 00:08:24.968 CC lib/env_dpdk/env.o 00:08:24.968 CC lib/env_dpdk/memory.o 00:08:24.968 CC lib/env_dpdk/pci.o 00:08:24.968 CC lib/env_dpdk/init.o 00:08:24.968 LIB libspdk_conf.a 00:08:24.968 CC lib/json/json_write.o 00:08:24.968 LIB libspdk_rdma_utils.a 00:08:24.969 SO libspdk_conf.so.6.0 00:08:24.969 SO libspdk_rdma_utils.so.1.0 00:08:24.969 CC lib/env_dpdk/threads.o 00:08:24.969 SYMLINK libspdk_conf.so 00:08:24.969 CC lib/env_dpdk/pci_ioat.o 00:08:24.969 SYMLINK libspdk_rdma_utils.so 00:08:24.969 CC lib/env_dpdk/pci_virtio.o 00:08:24.969 LIB libspdk_json.a 00:08:24.969 CC lib/env_dpdk/pci_vmd.o 00:08:24.969 SO libspdk_json.so.6.0 00:08:24.969 CC lib/env_dpdk/pci_idxd.o 00:08:24.969 CC lib/env_dpdk/sigbus_handler.o 00:08:24.969 CC lib/env_dpdk/pci_event.o 00:08:24.969 SYMLINK libspdk_json.so 00:08:24.969 CC lib/env_dpdk/pci_dpdk.o 00:08:24.969 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:24.969 CC lib/rdma_provider/common.o 00:08:24.969 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:24.969 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:24.969 CC lib/jsonrpc/jsonrpc_server.o 00:08:24.969 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:24.969 CC lib/jsonrpc/jsonrpc_client.o 00:08:24.969 LIB libspdk_vmd.a 00:08:24.969 LIB libspdk_idxd.a 00:08:24.969 SO libspdk_vmd.so.6.0 00:08:24.969 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:24.969 SO libspdk_idxd.so.12.1 00:08:24.969 SYMLINK libspdk_vmd.so 00:08:24.969 LIB libspdk_rdma_provider.a 00:08:24.969 SYMLINK libspdk_idxd.so 00:08:24.969 SO libspdk_rdma_provider.so.7.0 00:08:24.969 SYMLINK libspdk_rdma_provider.so 00:08:24.969 LIB libspdk_jsonrpc.a 00:08:24.969 SO libspdk_jsonrpc.so.6.0 00:08:24.969 SYMLINK libspdk_jsonrpc.so 00:08:24.969 CC lib/rpc/rpc.o 00:08:24.969 LIB libspdk_env_dpdk.a 00:08:24.969 LIB libspdk_rpc.a 00:08:24.969 SO libspdk_rpc.so.6.0 00:08:24.969 SO libspdk_env_dpdk.so.15.1 00:08:24.969 SYMLINK libspdk_rpc.so 00:08:24.969 SYMLINK libspdk_env_dpdk.so 00:08:24.969 CC lib/trace/trace.o 00:08:24.969 CC lib/trace/trace_flags.o 00:08:24.969 CC lib/trace/trace_rpc.o 00:08:24.969 CC lib/notify/notify.o 00:08:24.969 CC lib/keyring/keyring.o 00:08:24.969 CC lib/notify/notify_rpc.o 00:08:24.969 CC lib/keyring/keyring_rpc.o 00:08:25.226 LIB libspdk_notify.a 00:08:25.226 SO libspdk_notify.so.6.0 00:08:25.226 LIB libspdk_trace.a 00:08:25.226 SYMLINK libspdk_notify.so 00:08:25.226 SO libspdk_trace.so.11.0 00:08:25.483 LIB libspdk_keyring.a 00:08:25.483 SO libspdk_keyring.so.2.0 00:08:25.483 SYMLINK libspdk_trace.so 00:08:25.483 SYMLINK libspdk_keyring.so 00:08:25.741 CC lib/thread/thread.o 00:08:25.741 CC lib/thread/iobuf.o 00:08:25.741 CC lib/sock/sock_rpc.o 00:08:25.741 CC lib/sock/sock.o 00:08:26.674 LIB libspdk_sock.a 00:08:26.674 SO libspdk_sock.so.10.0 00:08:26.931 SYMLINK libspdk_sock.so 00:08:27.188 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:27.188 CC lib/nvme/nvme_ctrlr.o 00:08:27.188 CC lib/nvme/nvme_fabric.o 00:08:27.188 CC lib/nvme/nvme_ns_cmd.o 00:08:27.188 CC lib/nvme/nvme_ns.o 00:08:27.188 CC lib/nvme/nvme_pcie_common.o 00:08:27.188 CC lib/nvme/nvme_pcie.o 00:08:27.188 CC lib/nvme/nvme.o 00:08:27.188 
CC lib/nvme/nvme_qpair.o 00:08:28.562 CC lib/nvme/nvme_quirks.o 00:08:28.562 CC lib/nvme/nvme_transport.o 00:08:28.562 CC lib/nvme/nvme_discovery.o 00:08:28.820 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:29.078 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:29.078 CC lib/nvme/nvme_tcp.o 00:08:29.078 CC lib/nvme/nvme_opal.o 00:08:29.078 CC lib/nvme/nvme_io_msg.o 00:08:29.644 LIB libspdk_thread.a 00:08:29.644 CC lib/nvme/nvme_poll_group.o 00:08:29.644 SO libspdk_thread.so.11.0 00:08:29.902 CC lib/nvme/nvme_zns.o 00:08:29.902 SYMLINK libspdk_thread.so 00:08:29.902 CC lib/nvme/nvme_stubs.o 00:08:30.159 CC lib/nvme/nvme_auth.o 00:08:30.417 CC lib/accel/accel.o 00:08:30.417 CC lib/blob/blobstore.o 00:08:30.675 CC lib/init/json_config.o 00:08:30.675 CC lib/virtio/virtio.o 00:08:30.934 CC lib/virtio/virtio_vhost_user.o 00:08:30.934 CC lib/accel/accel_rpc.o 00:08:31.192 CC lib/init/subsystem.o 00:08:31.192 CC lib/init/subsystem_rpc.o 00:08:31.450 CC lib/accel/accel_sw.o 00:08:31.450 CC lib/init/rpc.o 00:08:31.451 CC lib/blob/request.o 00:08:31.451 CC lib/blob/zeroes.o 00:08:31.451 CC lib/virtio/virtio_vfio_user.o 00:08:31.726 CC lib/virtio/virtio_pci.o 00:08:31.984 LIB libspdk_init.a 00:08:31.984 SO libspdk_init.so.6.0 00:08:31.984 CC lib/blob/blob_bs_dev.o 00:08:32.242 SYMLINK libspdk_init.so 00:08:32.242 CC lib/nvme/nvme_cuse.o 00:08:32.242 CC lib/nvme/nvme_rdma.o 00:08:32.242 CC lib/fsdev/fsdev.o 00:08:32.242 CC lib/fsdev/fsdev_io.o 00:08:32.500 CC lib/event/app.o 00:08:32.500 LIB libspdk_virtio.a 00:08:32.500 CC lib/fsdev/fsdev_rpc.o 00:08:32.500 SO libspdk_virtio.so.7.0 00:08:32.758 SYMLINK libspdk_virtio.so 00:08:32.758 CC lib/event/reactor.o 00:08:32.758 CC lib/event/log_rpc.o 00:08:33.016 LIB libspdk_accel.a 00:08:33.016 SO libspdk_accel.so.16.0 00:08:33.016 CC lib/event/app_rpc.o 00:08:33.016 CC lib/event/scheduler_static.o 00:08:33.273 SYMLINK libspdk_accel.so 00:08:33.531 CC lib/bdev/bdev.o 00:08:33.531 CC lib/bdev/bdev_rpc.o 00:08:33.531 CC lib/bdev/bdev_zone.o 00:08:33.531 CC lib/bdev/part.o 00:08:33.788 CC lib/bdev/scsi_nvme.o 00:08:33.788 LIB libspdk_event.a 00:08:33.788 SO libspdk_event.so.14.0 00:08:33.788 LIB libspdk_fsdev.a 00:08:33.788 SO libspdk_fsdev.so.2.0 00:08:34.045 SYMLINK libspdk_event.so 00:08:34.045 SYMLINK libspdk_fsdev.so 00:08:34.303 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:34.868 LIB libspdk_nvme.a 00:08:35.125 SO libspdk_nvme.so.15.0 00:08:35.383 LIB libspdk_fuse_dispatcher.a 00:08:35.383 SO libspdk_fuse_dispatcher.so.1.0 00:08:35.383 SYMLINK libspdk_fuse_dispatcher.so 00:08:35.383 SYMLINK libspdk_nvme.so 00:08:36.758 LIB libspdk_blob.a 00:08:37.025 SO libspdk_blob.so.11.0 00:08:37.025 SYMLINK libspdk_blob.so 00:08:37.283 CC lib/blobfs/blobfs.o 00:08:37.283 CC lib/blobfs/tree.o 00:08:37.283 CC lib/lvol/lvol.o 00:08:37.851 LIB libspdk_bdev.a 00:08:37.851 SO libspdk_bdev.so.17.0 00:08:38.109 SYMLINK libspdk_bdev.so 00:08:38.109 CC lib/nvmf/ctrlr_discovery.o 00:08:38.109 CC lib/nvmf/ctrlr.o 00:08:38.109 CC lib/nvmf/subsystem.o 00:08:38.109 CC lib/nvmf/ctrlr_bdev.o 00:08:38.110 CC lib/ublk/ublk.o 00:08:38.110 CC lib/ftl/ftl_core.o 00:08:38.110 CC lib/scsi/dev.o 00:08:38.367 CC lib/nbd/nbd.o 00:08:38.626 LIB libspdk_blobfs.a 00:08:38.626 SO libspdk_blobfs.so.10.0 00:08:38.626 CC lib/scsi/lun.o 00:08:38.626 SYMLINK libspdk_blobfs.so 00:08:38.626 CC lib/nbd/nbd_rpc.o 00:08:38.626 LIB libspdk_lvol.a 00:08:38.883 SO libspdk_lvol.so.10.0 00:08:38.883 CC lib/ftl/ftl_init.o 00:08:38.884 SYMLINK libspdk_lvol.so 00:08:38.884 CC lib/ftl/ftl_layout.o 00:08:38.884 CC 
lib/ftl/ftl_debug.o 00:08:38.884 CC lib/ublk/ublk_rpc.o 00:08:38.884 CC lib/scsi/port.o 00:08:39.141 CC lib/ftl/ftl_io.o 00:08:39.141 CC lib/ftl/ftl_sb.o 00:08:39.141 LIB libspdk_nbd.a 00:08:39.142 CC lib/scsi/scsi.o 00:08:39.142 SO libspdk_nbd.so.7.0 00:08:39.142 CC lib/ftl/ftl_l2p.o 00:08:39.461 CC lib/ftl/ftl_l2p_flat.o 00:08:39.462 CC lib/ftl/ftl_nv_cache.o 00:08:39.462 SYMLINK libspdk_nbd.so 00:08:39.462 CC lib/nvmf/nvmf.o 00:08:39.462 CC lib/ftl/ftl_band.o 00:08:39.462 LIB libspdk_ublk.a 00:08:39.462 CC lib/scsi/scsi_bdev.o 00:08:39.462 SO libspdk_ublk.so.3.0 00:08:39.462 CC lib/ftl/ftl_band_ops.o 00:08:39.462 SYMLINK libspdk_ublk.so 00:08:39.462 CC lib/nvmf/nvmf_rpc.o 00:08:39.719 CC lib/scsi/scsi_pr.o 00:08:39.719 CC lib/ftl/ftl_writer.o 00:08:39.977 CC lib/ftl/ftl_rq.o 00:08:39.977 CC lib/ftl/ftl_reloc.o 00:08:39.977 CC lib/ftl/ftl_l2p_cache.o 00:08:39.977 CC lib/ftl/ftl_p2l.o 00:08:40.236 CC lib/nvmf/transport.o 00:08:40.236 CC lib/scsi/scsi_rpc.o 00:08:40.494 CC lib/scsi/task.o 00:08:40.494 CC lib/nvmf/tcp.o 00:08:40.494 CC lib/nvmf/stubs.o 00:08:40.494 CC lib/nvmf/mdns_server.o 00:08:40.494 CC lib/nvmf/rdma.o 00:08:40.752 CC lib/ftl/ftl_p2l_log.o 00:08:40.752 LIB libspdk_scsi.a 00:08:40.752 CC lib/ftl/mngt/ftl_mngt.o 00:08:40.752 SO libspdk_scsi.so.9.0 00:08:40.752 CC lib/nvmf/auth.o 00:08:40.752 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:41.010 SYMLINK libspdk_scsi.so 00:08:41.010 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:41.010 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:41.010 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:41.010 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:41.269 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:41.269 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:41.269 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:41.269 CC lib/vhost/vhost.o 00:08:41.269 CC lib/iscsi/conn.o 00:08:41.269 CC lib/iscsi/init_grp.o 00:08:41.527 CC lib/iscsi/iscsi.o 00:08:41.527 CC lib/iscsi/param.o 00:08:41.527 CC lib/iscsi/portal_grp.o 00:08:41.785 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:41.785 CC lib/iscsi/tgt_node.o 00:08:41.785 CC lib/iscsi/iscsi_subsystem.o 00:08:42.043 CC lib/iscsi/iscsi_rpc.o 00:08:42.043 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:42.043 CC lib/iscsi/task.o 00:08:42.043 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:42.300 CC lib/vhost/vhost_rpc.o 00:08:42.300 CC lib/vhost/vhost_scsi.o 00:08:42.300 CC lib/vhost/vhost_blk.o 00:08:42.557 CC lib/vhost/rte_vhost_user.o 00:08:42.557 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:42.557 CC lib/ftl/utils/ftl_conf.o 00:08:42.814 CC lib/ftl/utils/ftl_md.o 00:08:42.814 CC lib/ftl/utils/ftl_mempool.o 00:08:42.814 CC lib/ftl/utils/ftl_bitmap.o 00:08:42.814 CC lib/ftl/utils/ftl_property.o 00:08:43.071 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:43.071 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:43.071 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:43.328 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:43.328 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:43.328 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:43.328 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:43.328 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:43.328 LIB libspdk_iscsi.a 00:08:43.585 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:43.585 SO libspdk_iscsi.so.8.0 00:08:43.585 LIB libspdk_nvmf.a 00:08:43.585 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:43.585 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:43.585 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:43.585 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:43.842 SO libspdk_nvmf.so.20.0 00:08:43.842 CC lib/ftl/base/ftl_base_dev.o 00:08:43.842 SYMLINK libspdk_iscsi.so 00:08:43.842 CC lib/ftl/base/ftl_base_bdev.o 
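The terse CC/LIB/SO/SYMLINK records in this stretch are SPDK's quiet-make output: one line per compile step, static archive, versioned shared object, and compatibility symlink. Roughly, each group expands to commands like the following; this is an illustration only, since the real rules and flags live in SPDK's mk/ makefiles, and include paths plus the .so version number here are assumptions:

    # Illustration of one CC/LIB/SO/SYMLINK group; flags and paths are assumed.
    cc -g -fPIC -c lib/ftl/ftl_trace.c -o lib/ftl/ftl_trace.o           # CC
    ar rcs libspdk_ftl.a lib/ftl/*.o                                     # LIB
    cc -shared -Wl,-soname,libspdk_ftl.so.9 \
        -o libspdk_ftl.so.9.0 lib/ftl/*.o                                # SO
    ln -sf libspdk_ftl.so.9.0 libspdk_ftl.so                             # SYMLINK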
00:08:43.842 CC lib/ftl/ftl_trace.o 00:08:43.842 LIB libspdk_vhost.a 00:08:43.842 SO libspdk_vhost.so.8.0 00:08:44.100 SYMLINK libspdk_nvmf.so 00:08:44.100 SYMLINK libspdk_vhost.so 00:08:44.100 LIB libspdk_ftl.a 00:08:44.358 SO libspdk_ftl.so.9.0 00:08:44.616 SYMLINK libspdk_ftl.so 00:08:45.181 CC module/env_dpdk/env_dpdk_rpc.o 00:08:45.181 CC module/fsdev/aio/fsdev_aio.o 00:08:45.181 CC module/keyring/linux/keyring.o 00:08:45.181 CC module/scheduler/gscheduler/gscheduler.o 00:08:45.439 CC module/sock/posix/posix.o 00:08:45.439 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:45.439 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:45.439 CC module/keyring/file/keyring.o 00:08:45.439 CC module/blob/bdev/blob_bdev.o 00:08:45.439 CC module/accel/error/accel_error.o 00:08:45.439 LIB libspdk_env_dpdk_rpc.a 00:08:45.439 SO libspdk_env_dpdk_rpc.so.6.0 00:08:45.696 CC module/keyring/linux/keyring_rpc.o 00:08:45.696 LIB libspdk_scheduler_dpdk_governor.a 00:08:45.696 LIB libspdk_scheduler_gscheduler.a 00:08:45.697 SYMLINK libspdk_env_dpdk_rpc.so 00:08:45.697 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:45.697 CC module/accel/error/accel_error_rpc.o 00:08:45.697 LIB libspdk_scheduler_dynamic.a 00:08:45.697 SO libspdk_scheduler_gscheduler.so.4.0 00:08:45.697 CC module/keyring/file/keyring_rpc.o 00:08:45.697 SO libspdk_scheduler_dynamic.so.4.0 00:08:45.697 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:45.697 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:45.697 SYMLINK libspdk_scheduler_gscheduler.so 00:08:45.697 SYMLINK libspdk_scheduler_dynamic.so 00:08:45.954 LIB libspdk_keyring_linux.a 00:08:45.954 LIB libspdk_keyring_file.a 00:08:45.954 SO libspdk_keyring_linux.so.1.0 00:08:45.954 SO libspdk_keyring_file.so.2.0 00:08:45.954 LIB libspdk_accel_error.a 00:08:45.954 LIB libspdk_blob_bdev.a 00:08:45.954 SO libspdk_accel_error.so.2.0 00:08:45.954 CC module/accel/ioat/accel_ioat.o 00:08:45.954 CC module/fsdev/aio/linux_aio_mgr.o 00:08:45.954 SO libspdk_blob_bdev.so.11.0 00:08:45.954 SYMLINK libspdk_keyring_linux.so 00:08:45.954 CC module/accel/ioat/accel_ioat_rpc.o 00:08:45.954 SYMLINK libspdk_keyring_file.so 00:08:45.954 CC module/accel/iaa/accel_iaa.o 00:08:45.954 CC module/accel/iaa/accel_iaa_rpc.o 00:08:46.213 CC module/accel/dsa/accel_dsa.o 00:08:46.213 SYMLINK libspdk_accel_error.so 00:08:46.213 CC module/accel/dsa/accel_dsa_rpc.o 00:08:46.213 SYMLINK libspdk_blob_bdev.so 00:08:46.213 LIB libspdk_fsdev_aio.a 00:08:46.471 LIB libspdk_accel_ioat.a 00:08:46.471 SO libspdk_fsdev_aio.so.1.0 00:08:46.471 SO libspdk_accel_ioat.so.6.0 00:08:46.471 LIB libspdk_accel_iaa.a 00:08:46.471 SYMLINK libspdk_accel_ioat.so 00:08:46.471 SYMLINK libspdk_fsdev_aio.so 00:08:46.471 SO libspdk_accel_iaa.so.3.0 00:08:46.728 CC module/bdev/error/vbdev_error.o 00:08:46.728 CC module/bdev/lvol/vbdev_lvol.o 00:08:46.728 CC module/bdev/delay/vbdev_delay.o 00:08:46.728 SYMLINK libspdk_accel_iaa.so 00:08:46.728 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:46.728 CC module/bdev/gpt/gpt.o 00:08:46.728 CC module/blobfs/bdev/blobfs_bdev.o 00:08:46.728 LIB libspdk_accel_dsa.a 00:08:46.728 SO libspdk_accel_dsa.so.5.0 00:08:46.728 CC module/bdev/malloc/bdev_malloc.o 00:08:46.728 CC module/bdev/null/bdev_null.o 00:08:46.728 LIB libspdk_sock_posix.a 00:08:46.728 SYMLINK libspdk_accel_dsa.so 00:08:46.985 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:46.986 SO libspdk_sock_posix.so.6.0 00:08:46.986 CC module/bdev/gpt/vbdev_gpt.o 00:08:46.986 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:46.986 CC 
module/bdev/error/vbdev_error_rpc.o 00:08:46.986 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:46.986 SYMLINK libspdk_sock_posix.so 00:08:46.986 CC module/bdev/null/bdev_null_rpc.o 00:08:47.243 LIB libspdk_bdev_error.a 00:08:47.243 SO libspdk_bdev_error.so.6.0 00:08:47.243 LIB libspdk_blobfs_bdev.a 00:08:47.243 CC module/bdev/nvme/bdev_nvme.o 00:08:47.243 SO libspdk_blobfs_bdev.so.6.0 00:08:47.243 LIB libspdk_bdev_malloc.a 00:08:47.243 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:47.243 CC module/bdev/nvme/nvme_rpc.o 00:08:47.243 SYMLINK libspdk_bdev_error.so 00:08:47.243 SO libspdk_bdev_malloc.so.6.0 00:08:47.501 SYMLINK libspdk_blobfs_bdev.so 00:08:47.501 LIB libspdk_bdev_null.a 00:08:47.501 CC module/bdev/nvme/bdev_mdns_client.o 00:08:47.501 SYMLINK libspdk_bdev_malloc.so 00:08:47.501 LIB libspdk_bdev_delay.a 00:08:47.501 SO libspdk_bdev_null.so.6.0 00:08:47.501 SO libspdk_bdev_delay.so.6.0 00:08:47.501 LIB libspdk_bdev_gpt.a 00:08:47.501 SO libspdk_bdev_gpt.so.6.0 00:08:47.501 CC module/bdev/passthru/vbdev_passthru.o 00:08:47.501 SYMLINK libspdk_bdev_delay.so 00:08:47.501 CC module/bdev/nvme/vbdev_opal.o 00:08:47.501 SYMLINK libspdk_bdev_null.so 00:08:47.758 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:47.758 CC module/bdev/raid/bdev_raid.o 00:08:47.758 SYMLINK libspdk_bdev_gpt.so 00:08:47.758 CC module/bdev/raid/bdev_raid_rpc.o 00:08:47.758 CC module/bdev/raid/bdev_raid_sb.o 00:08:47.758 CC module/bdev/split/vbdev_split.o 00:08:48.015 CC module/bdev/split/vbdev_split_rpc.o 00:08:48.015 LIB libspdk_bdev_lvol.a 00:08:48.015 SO libspdk_bdev_lvol.so.6.0 00:08:48.015 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:48.272 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:48.272 SYMLINK libspdk_bdev_lvol.so 00:08:48.272 LIB libspdk_bdev_split.a 00:08:48.272 LIB libspdk_bdev_passthru.a 00:08:48.272 CC module/bdev/raid/raid0.o 00:08:48.530 SO libspdk_bdev_passthru.so.6.0 00:08:48.530 SO libspdk_bdev_split.so.6.0 00:08:48.530 CC module/bdev/xnvme/bdev_xnvme.o 00:08:48.530 SYMLINK libspdk_bdev_passthru.so 00:08:48.530 CC module/bdev/aio/bdev_aio.o 00:08:48.530 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:48.530 SYMLINK libspdk_bdev_split.so 00:08:48.530 CC module/bdev/raid/raid1.o 00:08:48.787 CC module/bdev/ftl/bdev_ftl.o 00:08:48.787 CC module/bdev/iscsi/bdev_iscsi.o 00:08:48.787 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:48.787 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:08:49.045 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:49.045 CC module/bdev/aio/bdev_aio_rpc.o 00:08:49.302 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:49.302 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:49.302 LIB libspdk_bdev_aio.a 00:08:49.302 LIB libspdk_bdev_xnvme.a 00:08:49.302 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:49.302 SO libspdk_bdev_xnvme.so.3.0 00:08:49.302 SO libspdk_bdev_aio.so.6.0 00:08:49.302 CC module/bdev/raid/concat.o 00:08:49.302 SYMLINK libspdk_bdev_aio.so 00:08:49.302 SYMLINK libspdk_bdev_xnvme.so 00:08:49.302 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:49.560 LIB libspdk_bdev_zone_block.a 00:08:49.560 SO libspdk_bdev_zone_block.so.6.0 00:08:49.560 LIB libspdk_bdev_iscsi.a 00:08:49.560 LIB libspdk_bdev_ftl.a 00:08:49.560 SO libspdk_bdev_iscsi.so.6.0 00:08:49.560 SYMLINK libspdk_bdev_zone_block.so 00:08:49.560 SO libspdk_bdev_ftl.so.6.0 00:08:49.560 LIB libspdk_bdev_virtio.a 00:08:49.818 SYMLINK libspdk_bdev_iscsi.so 00:08:49.818 SO libspdk_bdev_virtio.so.6.0 00:08:49.818 SYMLINK libspdk_bdev_ftl.so 00:08:49.818 SYMLINK libspdk_bdev_virtio.so 00:08:49.818 LIB libspdk_bdev_raid.a 
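The SO/SYMLINK pairs throughout this phase (for example libspdk_util.so.10.1 with its libspdk_util.so symlink) encode each library's ABI version in its SONAME. A quick way to confirm what a finished artifact claims and exports, assuming the in-tree output directory (the exact path is an assumption):

    # Sketch: verify SONAME and exported symbols of a freshly built library.
    readelf -d build/lib/libspdk_util.so.10.1 | grep SONAME
    nm -D --defined-only build/lib/libspdk_util.so.10.1 | head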
00:08:49.818 SO libspdk_bdev_raid.so.6.0 00:08:50.076 SYMLINK libspdk_bdev_raid.so 00:08:51.973 LIB libspdk_bdev_nvme.a 00:08:51.973 SO libspdk_bdev_nvme.so.7.1 00:08:51.973 SYMLINK libspdk_bdev_nvme.so 00:08:52.539 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:52.539 CC module/event/subsystems/keyring/keyring.o 00:08:52.539 CC module/event/subsystems/iobuf/iobuf.o 00:08:52.539 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:52.539 CC module/event/subsystems/sock/sock.o 00:08:52.539 CC module/event/subsystems/vmd/vmd.o 00:08:52.539 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:52.539 CC module/event/subsystems/scheduler/scheduler.o 00:08:52.539 CC module/event/subsystems/fsdev/fsdev.o 00:08:52.539 LIB libspdk_event_keyring.a 00:08:52.799 LIB libspdk_event_vhost_blk.a 00:08:52.799 SO libspdk_event_keyring.so.1.0 00:08:52.799 LIB libspdk_event_vmd.a 00:08:52.799 LIB libspdk_event_iobuf.a 00:08:52.799 SO libspdk_event_vhost_blk.so.3.0 00:08:52.799 LIB libspdk_event_fsdev.a 00:08:52.799 SO libspdk_event_vmd.so.6.0 00:08:52.799 LIB libspdk_event_scheduler.a 00:08:52.799 SO libspdk_event_iobuf.so.3.0 00:08:52.799 LIB libspdk_event_sock.a 00:08:52.799 SO libspdk_event_fsdev.so.1.0 00:08:52.799 SYMLINK libspdk_event_vhost_blk.so 00:08:52.799 SYMLINK libspdk_event_keyring.so 00:08:52.799 SO libspdk_event_scheduler.so.4.0 00:08:52.799 SO libspdk_event_sock.so.5.0 00:08:52.799 SYMLINK libspdk_event_vmd.so 00:08:52.799 SYMLINK libspdk_event_fsdev.so 00:08:52.799 SYMLINK libspdk_event_iobuf.so 00:08:52.799 SYMLINK libspdk_event_scheduler.so 00:08:52.799 SYMLINK libspdk_event_sock.so 00:08:53.062 CC module/event/subsystems/accel/accel.o 00:08:53.320 LIB libspdk_event_accel.a 00:08:53.320 SO libspdk_event_accel.so.6.0 00:08:53.577 SYMLINK libspdk_event_accel.so 00:08:53.835 CC module/event/subsystems/bdev/bdev.o 00:08:53.835 LIB libspdk_event_bdev.a 00:08:54.093 SO libspdk_event_bdev.so.6.0 00:08:54.093 SYMLINK libspdk_event_bdev.so 00:08:54.351 CC module/event/subsystems/scsi/scsi.o 00:08:54.351 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:54.351 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:54.351 CC module/event/subsystems/nbd/nbd.o 00:08:54.351 CC module/event/subsystems/ublk/ublk.o 00:08:54.608 LIB libspdk_event_scsi.a 00:08:54.608 LIB libspdk_event_nbd.a 00:08:54.608 LIB libspdk_event_ublk.a 00:08:54.608 SO libspdk_event_scsi.so.6.0 00:08:54.608 SO libspdk_event_nbd.so.6.0 00:08:54.608 SO libspdk_event_ublk.so.3.0 00:08:54.866 SYMLINK libspdk_event_scsi.so 00:08:54.866 SYMLINK libspdk_event_nbd.so 00:08:54.866 SYMLINK libspdk_event_ublk.so 00:08:54.866 LIB libspdk_event_nvmf.a 00:08:54.866 SO libspdk_event_nvmf.so.6.0 00:08:54.866 SYMLINK libspdk_event_nvmf.so 00:08:55.124 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:55.124 CC module/event/subsystems/iscsi/iscsi.o 00:08:55.124 LIB libspdk_event_vhost_scsi.a 00:08:55.383 SO libspdk_event_vhost_scsi.so.3.0 00:08:55.383 LIB libspdk_event_iscsi.a 00:08:55.383 SO libspdk_event_iscsi.so.6.0 00:08:55.383 SYMLINK libspdk_event_vhost_scsi.so 00:08:55.383 SYMLINK libspdk_event_iscsi.so 00:08:55.640 SO libspdk.so.6.0 00:08:55.640 SYMLINK libspdk.so 00:08:55.899 CC app/spdk_nvme_identify/identify.o 00:08:55.899 CC app/trace_record/trace_record.o 00:08:55.899 CC app/spdk_lspci/spdk_lspci.o 00:08:55.899 CXX app/trace/trace.o 00:08:55.899 CC app/spdk_nvme_perf/perf.o 00:08:55.899 CC app/spdk_tgt/spdk_tgt.o 00:08:55.899 CC app/nvmf_tgt/nvmf_main.o 00:08:55.899 CC app/iscsi_tgt/iscsi_tgt.o 00:08:55.899 LINK spdk_lspci 00:08:56.156 CC 
examples/util/zipf/zipf.o 00:08:56.156 CC test/thread/poller_perf/poller_perf.o 00:08:56.156 LINK nvmf_tgt 00:08:56.414 LINK zipf 00:08:56.414 LINK spdk_trace_record 00:08:56.414 LINK spdk_tgt 00:08:56.414 LINK iscsi_tgt 00:08:56.414 LINK poller_perf 00:08:56.672 LINK spdk_trace 00:08:56.672 CC test/dma/test_dma/test_dma.o 00:08:56.930 TEST_HEADER include/spdk/accel.h 00:08:56.930 TEST_HEADER include/spdk/accel_module.h 00:08:56.930 CC examples/ioat/perf/perf.o 00:08:56.930 TEST_HEADER include/spdk/assert.h 00:08:56.930 TEST_HEADER include/spdk/barrier.h 00:08:56.930 TEST_HEADER include/spdk/base64.h 00:08:56.930 TEST_HEADER include/spdk/bdev.h 00:08:56.930 TEST_HEADER include/spdk/bdev_module.h 00:08:56.930 TEST_HEADER include/spdk/bdev_zone.h 00:08:56.930 TEST_HEADER include/spdk/bit_array.h 00:08:56.930 TEST_HEADER include/spdk/bit_pool.h 00:08:56.930 TEST_HEADER include/spdk/blob_bdev.h 00:08:56.930 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:56.930 TEST_HEADER include/spdk/blobfs.h 00:08:56.930 TEST_HEADER include/spdk/blob.h 00:08:56.930 TEST_HEADER include/spdk/conf.h 00:08:56.930 TEST_HEADER include/spdk/config.h 00:08:56.930 TEST_HEADER include/spdk/cpuset.h 00:08:56.930 TEST_HEADER include/spdk/crc16.h 00:08:56.930 TEST_HEADER include/spdk/crc32.h 00:08:56.930 TEST_HEADER include/spdk/crc64.h 00:08:56.930 TEST_HEADER include/spdk/dif.h 00:08:56.930 TEST_HEADER include/spdk/dma.h 00:08:56.930 TEST_HEADER include/spdk/endian.h 00:08:56.930 TEST_HEADER include/spdk/env_dpdk.h 00:08:56.930 CC test/app/bdev_svc/bdev_svc.o 00:08:56.930 TEST_HEADER include/spdk/env.h 00:08:56.930 TEST_HEADER include/spdk/event.h 00:08:56.930 TEST_HEADER include/spdk/fd_group.h 00:08:56.930 TEST_HEADER include/spdk/fd.h 00:08:56.930 TEST_HEADER include/spdk/file.h 00:08:56.930 TEST_HEADER include/spdk/fsdev.h 00:08:56.930 TEST_HEADER include/spdk/fsdev_module.h 00:08:56.930 TEST_HEADER include/spdk/ftl.h 00:08:56.930 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:56.930 TEST_HEADER include/spdk/gpt_spec.h 00:08:56.930 TEST_HEADER include/spdk/hexlify.h 00:08:56.930 CC app/spdk_nvme_discover/discovery_aer.o 00:08:56.930 TEST_HEADER include/spdk/histogram_data.h 00:08:56.930 TEST_HEADER include/spdk/idxd.h 00:08:56.930 TEST_HEADER include/spdk/idxd_spec.h 00:08:56.930 CC examples/vmd/lsvmd/lsvmd.o 00:08:56.930 TEST_HEADER include/spdk/init.h 00:08:56.930 TEST_HEADER include/spdk/ioat.h 00:08:56.930 TEST_HEADER include/spdk/ioat_spec.h 00:08:56.930 TEST_HEADER include/spdk/iscsi_spec.h 00:08:56.930 TEST_HEADER include/spdk/json.h 00:08:56.930 TEST_HEADER include/spdk/jsonrpc.h 00:08:56.930 TEST_HEADER include/spdk/keyring.h 00:08:56.930 TEST_HEADER include/spdk/keyring_module.h 00:08:56.930 TEST_HEADER include/spdk/likely.h 00:08:56.930 TEST_HEADER include/spdk/log.h 00:08:56.930 TEST_HEADER include/spdk/lvol.h 00:08:56.930 TEST_HEADER include/spdk/md5.h 00:08:56.930 TEST_HEADER include/spdk/memory.h 00:08:56.930 TEST_HEADER include/spdk/mmio.h 00:08:56.930 TEST_HEADER include/spdk/nbd.h 00:08:56.930 TEST_HEADER include/spdk/net.h 00:08:57.188 TEST_HEADER include/spdk/notify.h 00:08:57.188 TEST_HEADER include/spdk/nvme.h 00:08:57.188 TEST_HEADER include/spdk/nvme_intel.h 00:08:57.188 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:57.188 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:57.188 TEST_HEADER include/spdk/nvme_spec.h 00:08:57.188 TEST_HEADER include/spdk/nvme_zns.h 00:08:57.188 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:57.188 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:57.188 TEST_HEADER 
include/spdk/nvmf.h 00:08:57.188 TEST_HEADER include/spdk/nvmf_spec.h 00:08:57.188 TEST_HEADER include/spdk/nvmf_transport.h 00:08:57.188 TEST_HEADER include/spdk/opal.h 00:08:57.188 TEST_HEADER include/spdk/opal_spec.h 00:08:57.188 TEST_HEADER include/spdk/pci_ids.h 00:08:57.188 TEST_HEADER include/spdk/pipe.h 00:08:57.188 TEST_HEADER include/spdk/queue.h 00:08:57.188 TEST_HEADER include/spdk/reduce.h 00:08:57.188 TEST_HEADER include/spdk/rpc.h 00:08:57.188 TEST_HEADER include/spdk/scheduler.h 00:08:57.188 TEST_HEADER include/spdk/scsi.h 00:08:57.188 TEST_HEADER include/spdk/scsi_spec.h 00:08:57.188 TEST_HEADER include/spdk/sock.h 00:08:57.188 TEST_HEADER include/spdk/stdinc.h 00:08:57.188 TEST_HEADER include/spdk/string.h 00:08:57.188 TEST_HEADER include/spdk/thread.h 00:08:57.188 TEST_HEADER include/spdk/trace.h 00:08:57.188 TEST_HEADER include/spdk/trace_parser.h 00:08:57.188 TEST_HEADER include/spdk/tree.h 00:08:57.188 TEST_HEADER include/spdk/ublk.h 00:08:57.188 TEST_HEADER include/spdk/util.h 00:08:57.188 TEST_HEADER include/spdk/uuid.h 00:08:57.188 TEST_HEADER include/spdk/version.h 00:08:57.188 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:57.188 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:57.188 TEST_HEADER include/spdk/vhost.h 00:08:57.188 TEST_HEADER include/spdk/vmd.h 00:08:57.188 TEST_HEADER include/spdk/xor.h 00:08:57.188 TEST_HEADER include/spdk/zipf.h 00:08:57.188 CXX test/cpp_headers/accel.o 00:08:57.188 LINK bdev_svc 00:08:57.188 LINK lsvmd 00:08:57.188 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:57.445 LINK spdk_nvme_discover 00:08:57.445 CC test/env/mem_callbacks/mem_callbacks.o 00:08:57.445 LINK ioat_perf 00:08:57.445 CXX test/cpp_headers/accel_module.o 00:08:57.703 LINK test_dma 00:08:57.703 CC examples/vmd/led/led.o 00:08:57.703 LINK spdk_nvme_identify 00:08:57.703 CXX test/cpp_headers/assert.o 00:08:57.703 LINK spdk_nvme_perf 00:08:57.703 CC examples/ioat/verify/verify.o 00:08:57.703 LINK led 00:08:57.703 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:57.703 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:57.960 LINK nvme_fuzz 00:08:57.960 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:57.960 CXX test/cpp_headers/barrier.o 00:08:57.960 CXX test/cpp_headers/base64.o 00:08:58.217 CXX test/cpp_headers/bdev.o 00:08:58.217 CC test/app/histogram_perf/histogram_perf.o 00:08:58.217 CC app/spdk_top/spdk_top.o 00:08:58.217 LINK mem_callbacks 00:08:58.217 LINK histogram_perf 00:08:58.217 LINK verify 00:08:58.475 CXX test/cpp_headers/bdev_module.o 00:08:58.475 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:58.475 CC app/vhost/vhost.o 00:08:58.475 LINK vhost_fuzz 00:08:58.475 CC examples/idxd/perf/perf.o 00:08:58.745 LINK interrupt_tgt 00:08:58.745 CC test/env/vtophys/vtophys.o 00:08:58.745 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:58.745 CC app/spdk_dd/spdk_dd.o 00:08:58.745 CXX test/cpp_headers/bdev_zone.o 00:08:58.745 LINK vhost 00:08:58.745 LINK vtophys 00:08:59.036 LINK env_dpdk_post_init 00:08:59.036 CC examples/sock/hello_world/hello_sock.o 00:08:59.036 CXX test/cpp_headers/bit_array.o 00:08:59.036 CC examples/thread/thread/thread_ex.o 00:08:59.293 LINK idxd_perf 00:08:59.293 LINK spdk_dd 00:08:59.293 CC test/app/jsoncat/jsoncat.o 00:08:59.293 CC test/env/memory/memory_ut.o 00:08:59.293 CXX test/cpp_headers/bit_pool.o 00:08:59.551 LINK hello_sock 00:08:59.551 CC test/event/event_perf/event_perf.o 00:08:59.551 LINK thread 00:08:59.551 CC test/event/reactor/reactor.o 00:08:59.551 LINK jsoncat 00:08:59.551 LINK event_perf 00:08:59.808 CXX 
test/cpp_headers/blob_bdev.o 00:08:59.808 LINK spdk_top 00:08:59.808 CC test/rpc_client/rpc_client_test.o 00:08:59.808 CC test/nvme/aer/aer.o 00:08:59.808 LINK reactor 00:09:00.066 CC examples/nvme/hello_world/hello_world.o 00:09:00.066 CXX test/cpp_headers/blobfs_bdev.o 00:09:00.066 CC test/accel/dif/dif.o 00:09:00.066 LINK rpc_client_test 00:09:00.066 CC test/event/reactor_perf/reactor_perf.o 00:09:00.066 LINK aer 00:09:00.324 CC test/blobfs/mkfs/mkfs.o 00:09:00.324 CC app/fio/nvme/fio_plugin.o 00:09:00.324 CXX test/cpp_headers/blobfs.o 00:09:00.324 LINK hello_world 00:09:00.582 LINK reactor_perf 00:09:00.582 CC test/event/app_repeat/app_repeat.o 00:09:00.582 LINK mkfs 00:09:00.582 CC test/nvme/reset/reset.o 00:09:00.582 CXX test/cpp_headers/blob.o 00:09:00.840 LINK app_repeat 00:09:00.840 CC examples/nvme/reconnect/reconnect.o 00:09:00.840 LINK iscsi_fuzz 00:09:00.840 CXX test/cpp_headers/conf.o 00:09:00.840 CC test/nvme/sgl/sgl.o 00:09:01.098 LINK reset 00:09:01.098 LINK spdk_nvme 00:09:01.356 CC app/fio/bdev/fio_plugin.o 00:09:01.356 CXX test/cpp_headers/config.o 00:09:01.356 CXX test/cpp_headers/cpuset.o 00:09:01.356 LINK memory_ut 00:09:01.356 CC test/event/scheduler/scheduler.o 00:09:01.614 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:01.614 CC test/app/stub/stub.o 00:09:01.614 LINK sgl 00:09:01.614 CXX test/cpp_headers/crc16.o 00:09:01.614 LINK dif 00:09:01.871 CC test/lvol/esnap/esnap.o 00:09:01.871 LINK reconnect 00:09:01.871 CC test/env/pci/pci_ut.o 00:09:01.871 LINK scheduler 00:09:01.871 LINK stub 00:09:01.871 CXX test/cpp_headers/crc32.o 00:09:01.871 CXX test/cpp_headers/crc64.o 00:09:02.128 CC test/nvme/e2edp/nvme_dp.o 00:09:02.386 CXX test/cpp_headers/dif.o 00:09:02.386 CC test/nvme/overhead/overhead.o 00:09:02.386 CC test/nvme/err_injection/err_injection.o 00:09:02.386 LINK spdk_bdev 00:09:02.386 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:02.643 CC test/bdev/bdevio/bdevio.o 00:09:02.643 LINK nvme_dp 00:09:02.643 CXX test/cpp_headers/dma.o 00:09:02.643 LINK pci_ut 00:09:02.898 LINK nvme_manage 00:09:02.898 CC test/nvme/startup/startup.o 00:09:02.898 LINK err_injection 00:09:02.898 CXX test/cpp_headers/endian.o 00:09:03.155 LINK hello_fsdev 00:09:03.155 LINK overhead 00:09:03.155 CC test/nvme/reserve/reserve.o 00:09:03.413 CXX test/cpp_headers/env_dpdk.o 00:09:03.413 CC examples/nvme/arbitration/arbitration.o 00:09:03.413 LINK bdevio 00:09:03.413 LINK startup 00:09:03.413 CC test/nvme/simple_copy/simple_copy.o 00:09:03.413 CC examples/nvme/hotplug/hotplug.o 00:09:03.413 CXX test/cpp_headers/env.o 00:09:03.671 LINK reserve 00:09:03.671 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:03.671 CXX test/cpp_headers/event.o 00:09:03.671 CC examples/accel/perf/accel_perf.o 00:09:03.929 CC examples/nvme/abort/abort.o 00:09:03.929 CC test/nvme/connect_stress/connect_stress.o 00:09:03.929 LINK cmb_copy 00:09:03.929 CXX test/cpp_headers/fd_group.o 00:09:03.929 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:03.929 LINK hotplug 00:09:04.187 LINK simple_copy 00:09:04.187 LINK arbitration 00:09:04.187 LINK pmr_persistence 00:09:04.187 LINK connect_stress 00:09:04.445 CXX test/cpp_headers/fd.o 00:09:04.445 CXX test/cpp_headers/file.o 00:09:04.445 CC examples/blob/hello_world/hello_blob.o 00:09:04.445 CC examples/blob/cli/blobcli.o 00:09:04.703 CC test/nvme/boot_partition/boot_partition.o 00:09:04.703 CXX test/cpp_headers/fsdev.o 00:09:04.703 LINK accel_perf 00:09:04.703 CC test/nvme/compliance/nvme_compliance.o 00:09:04.703 LINK abort 00:09:04.703 CC 
test/nvme/fused_ordering/fused_ordering.o 00:09:04.961 LINK boot_partition 00:09:04.961 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:04.961 LINK hello_blob 00:09:04.961 CXX test/cpp_headers/fsdev_module.o 00:09:04.961 LINK fused_ordering 00:09:05.219 CXX test/cpp_headers/ftl.o 00:09:05.219 LINK nvme_compliance 00:09:05.219 LINK doorbell_aers 00:09:05.219 CXX test/cpp_headers/fuse_dispatcher.o 00:09:05.219 CC examples/bdev/hello_world/hello_bdev.o 00:09:05.477 CC examples/bdev/bdevperf/bdevperf.o 00:09:05.477 CXX test/cpp_headers/gpt_spec.o 00:09:05.477 CC test/nvme/cuse/cuse.o 00:09:05.477 CC test/nvme/fdp/fdp.o 00:09:05.477 CXX test/cpp_headers/hexlify.o 00:09:05.477 CXX test/cpp_headers/histogram_data.o 00:09:05.734 CXX test/cpp_headers/idxd.o 00:09:05.734 LINK hello_bdev 00:09:05.734 LINK blobcli 00:09:05.734 CXX test/cpp_headers/idxd_spec.o 00:09:05.734 CXX test/cpp_headers/init.o 00:09:05.992 CXX test/cpp_headers/ioat.o 00:09:05.992 CXX test/cpp_headers/ioat_spec.o 00:09:05.992 CXX test/cpp_headers/iscsi_spec.o 00:09:05.992 CXX test/cpp_headers/json.o 00:09:05.992 CXX test/cpp_headers/jsonrpc.o 00:09:06.249 CXX test/cpp_headers/keyring.o 00:09:06.249 CXX test/cpp_headers/keyring_module.o 00:09:06.249 LINK fdp 00:09:06.249 CXX test/cpp_headers/likely.o 00:09:06.249 CXX test/cpp_headers/log.o 00:09:06.249 CXX test/cpp_headers/lvol.o 00:09:06.249 CXX test/cpp_headers/md5.o 00:09:06.249 CXX test/cpp_headers/memory.o 00:09:06.507 CXX test/cpp_headers/mmio.o 00:09:06.507 CXX test/cpp_headers/nbd.o 00:09:06.507 CXX test/cpp_headers/net.o 00:09:06.507 CXX test/cpp_headers/notify.o 00:09:06.507 CXX test/cpp_headers/nvme.o 00:09:06.507 CXX test/cpp_headers/nvme_intel.o 00:09:06.507 CXX test/cpp_headers/nvme_ocssd.o 00:09:06.765 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:06.765 CXX test/cpp_headers/nvme_spec.o 00:09:06.765 CXX test/cpp_headers/nvme_zns.o 00:09:06.765 CXX test/cpp_headers/nvmf_cmd.o 00:09:06.765 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:06.765 CXX test/cpp_headers/nvmf.o 00:09:07.067 CXX test/cpp_headers/nvmf_spec.o 00:09:07.067 CXX test/cpp_headers/nvmf_transport.o 00:09:07.067 CXX test/cpp_headers/opal.o 00:09:07.067 CXX test/cpp_headers/opal_spec.o 00:09:07.067 CXX test/cpp_headers/pci_ids.o 00:09:07.348 CXX test/cpp_headers/pipe.o 00:09:07.348 CXX test/cpp_headers/queue.o 00:09:07.348 LINK bdevperf 00:09:07.348 CXX test/cpp_headers/reduce.o 00:09:07.348 CXX test/cpp_headers/rpc.o 00:09:07.348 CXX test/cpp_headers/scheduler.o 00:09:07.348 CXX test/cpp_headers/scsi_spec.o 00:09:07.348 CXX test/cpp_headers/scsi.o 00:09:07.348 CXX test/cpp_headers/sock.o 00:09:07.606 CXX test/cpp_headers/stdinc.o 00:09:07.606 CXX test/cpp_headers/string.o 00:09:07.606 CXX test/cpp_headers/thread.o 00:09:07.606 CXX test/cpp_headers/trace.o 00:09:07.864 CXX test/cpp_headers/trace_parser.o 00:09:07.864 CXX test/cpp_headers/tree.o 00:09:07.864 CXX test/cpp_headers/ublk.o 00:09:07.864 CXX test/cpp_headers/util.o 00:09:07.864 CXX test/cpp_headers/uuid.o 00:09:07.864 CXX test/cpp_headers/version.o 00:09:07.864 CXX test/cpp_headers/vfio_user_pci.o 00:09:07.864 CXX test/cpp_headers/vfio_user_spec.o 00:09:08.121 CXX test/cpp_headers/vhost.o 00:09:08.121 CXX test/cpp_headers/vmd.o 00:09:08.121 CXX test/cpp_headers/xor.o 00:09:08.121 CXX test/cpp_headers/zipf.o 00:09:08.121 CC examples/nvmf/nvmf/nvmf.o 00:09:08.380 LINK cuse 00:09:08.637 LINK nvmf 00:09:11.917 LINK esnap 00:09:11.917 00:09:11.917 real 2m22.601s 00:09:11.917 user 13m16.693s 00:09:11.917 sys 2m37.018s 00:09:11.917 13:42:58 make -- 
common/autotest_common.sh@1128 -- $ xtrace_disable 00:09:11.917 13:42:58 make -- common/autotest_common.sh@10 -- $ set +x 00:09:11.917 ************************************ 00:09:11.917 END TEST make 00:09:11.917 ************************************ 00:09:11.917 13:42:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:11.917 13:42:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:11.917 13:42:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:11.917 13:42:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:11.917 13:42:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:11.917 13:42:58 -- pm/common@44 -- $ pid=5346 00:09:11.917 13:42:58 -- pm/common@50 -- $ kill -TERM 5346 00:09:11.917 13:42:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:11.917 13:42:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:11.917 13:42:58 -- pm/common@44 -- $ pid=5348 00:09:11.917 13:42:58 -- pm/common@50 -- $ kill -TERM 5348 00:09:11.917 13:42:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:11.917 13:42:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:12.175 13:42:58 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:12.175 13:42:58 -- common/autotest_common.sh@1691 -- # lcov --version 00:09:12.175 13:42:58 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:12.175 13:42:58 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:12.175 13:42:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.175 13:42:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.175 13:42:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.175 13:42:58 -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.175 13:42:58 -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.175 13:42:58 -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.175 13:42:58 -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.175 13:42:58 -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.175 13:42:58 -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.176 13:42:58 -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.176 13:42:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.176 13:42:58 -- scripts/common.sh@344 -- # case "$op" in 00:09:12.176 13:42:58 -- scripts/common.sh@345 -- # : 1 00:09:12.176 13:42:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.176 13:42:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.176 13:42:58 -- scripts/common.sh@365 -- # decimal 1 00:09:12.176 13:42:58 -- scripts/common.sh@353 -- # local d=1 00:09:12.176 13:42:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.176 13:42:58 -- scripts/common.sh@355 -- # echo 1 00:09:12.176 13:42:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.176 13:42:58 -- scripts/common.sh@366 -- # decimal 2 00:09:12.176 13:42:58 -- scripts/common.sh@353 -- # local d=2 00:09:12.176 13:42:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.176 13:42:58 -- scripts/common.sh@355 -- # echo 2 00:09:12.176 13:42:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.176 13:42:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.176 13:42:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.176 13:42:58 -- scripts/common.sh@368 -- # return 0 00:09:12.176 13:42:58 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.176 13:42:58 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:12.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.176 --rc genhtml_branch_coverage=1 00:09:12.176 --rc genhtml_function_coverage=1 00:09:12.176 --rc genhtml_legend=1 00:09:12.176 --rc geninfo_all_blocks=1 00:09:12.176 --rc geninfo_unexecuted_blocks=1 00:09:12.176 00:09:12.176 ' 00:09:12.176 13:42:58 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:12.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.176 --rc genhtml_branch_coverage=1 00:09:12.176 --rc genhtml_function_coverage=1 00:09:12.176 --rc genhtml_legend=1 00:09:12.176 --rc geninfo_all_blocks=1 00:09:12.176 --rc geninfo_unexecuted_blocks=1 00:09:12.176 00:09:12.176 ' 00:09:12.176 13:42:58 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:12.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.176 --rc genhtml_branch_coverage=1 00:09:12.176 --rc genhtml_function_coverage=1 00:09:12.176 --rc genhtml_legend=1 00:09:12.176 --rc geninfo_all_blocks=1 00:09:12.176 --rc geninfo_unexecuted_blocks=1 00:09:12.176 00:09:12.176 ' 00:09:12.176 13:42:58 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:12.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.176 --rc genhtml_branch_coverage=1 00:09:12.176 --rc genhtml_function_coverage=1 00:09:12.176 --rc genhtml_legend=1 00:09:12.176 --rc geninfo_all_blocks=1 00:09:12.176 --rc geninfo_unexecuted_blocks=1 00:09:12.176 00:09:12.176 ' 00:09:12.176 13:42:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.176 13:42:58 -- nvmf/common.sh@7 -- # uname -s 00:09:12.176 13:42:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.176 13:42:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.176 13:42:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.176 13:42:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.176 13:42:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.176 13:42:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.176 13:42:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.176 13:42:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.176 13:42:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.176 13:42:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.176 13:42:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bfb4f1d-1d86-4c5d-ad5c-cb927cc7889e 00:09:12.176 
13:42:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=7bfb4f1d-1d86-4c5d-ad5c-cb927cc7889e 00:09:12.176 13:42:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.176 13:42:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.176 13:42:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:12.176 13:42:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.176 13:42:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.176 13:42:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.176 13:42:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.176 13:42:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.176 13:42:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.176 13:42:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.176 13:42:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.176 13:42:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.176 13:42:58 -- paths/export.sh@5 -- # export PATH 00:09:12.176 13:42:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.176 13:42:58 -- nvmf/common.sh@51 -- # : 0 00:09:12.176 13:42:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.176 13:42:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.176 13:42:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.176 13:42:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.176 13:42:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.176 13:42:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.176 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.176 13:42:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.176 13:42:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.176 13:42:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.176 13:42:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:12.176 13:42:58 -- spdk/autotest.sh@32 -- # uname -s 00:09:12.176 13:42:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:12.176 13:42:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:12.176 13:42:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:12.176 13:42:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:12.176 13:42:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:12.176 13:42:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:12.176 13:42:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:12.176 13:42:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:12.176 13:42:59 -- spdk/autotest.sh@48 -- # udevadm_pid=55388 00:09:12.176 13:42:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:12.176 13:42:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:12.176 13:42:59 -- pm/common@17 -- # local monitor 00:09:12.176 13:42:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:12.176 13:42:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:12.176 13:42:59 -- pm/common@25 -- # sleep 1 00:09:12.176 13:42:59 -- pm/common@21 -- # date +%s 00:09:12.176 13:42:59 -- pm/common@21 -- # date +%s 00:09:12.176 13:42:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730727779 00:09:12.176 13:42:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730727779 00:09:12.176 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730727779_collect-cpu-load.pm.log 00:09:12.176 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730727779_collect-vmstat.pm.log 00:09:13.139 13:43:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:13.139 13:43:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:13.139 13:43:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.139 13:43:00 -- common/autotest_common.sh@10 -- # set +x 00:09:13.139 13:43:00 -- spdk/autotest.sh@59 -- # create_test_list 00:09:13.139 13:43:00 -- common/autotest_common.sh@750 -- # xtrace_disable 00:09:13.139 13:43:00 -- common/autotest_common.sh@10 -- # set +x 00:09:13.398 13:43:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:13.398 13:43:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:13.398 13:43:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:13.398 13:43:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:13.398 13:43:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:13.398 13:43:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:13.398 13:43:00 -- common/autotest_common.sh@1455 -- # uname 00:09:13.398 13:43:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:09:13.398 13:43:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:13.398 13:43:00 -- common/autotest_common.sh@1475 -- # uname 00:09:13.398 13:43:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:09:13.398 13:43:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:13.398 13:43:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:13.398 lcov: LCOV version 1.15 00:09:13.398 13:43:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:31.468 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:31.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:49.581 13:43:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:49.581 13:43:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.581 13:43:35 -- common/autotest_common.sh@10 -- # set +x 00:09:49.581 13:43:35 -- spdk/autotest.sh@78 -- # rm -f 00:09:49.581 13:43:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:49.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.147 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:50.147 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:50.147 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:09:50.147 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:09:50.147 13:43:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:50.147 13:43:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:50.147 13:43:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:50.147 13:43:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:50.147 13:43:37 
-- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1
00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1
00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:09:50.147 13:43:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:09:50.147 13:43:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:09:50.147 13:43:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1
00:09:50.147 13:43:37 -- common/autotest_common.sh@1648 -- # local device=nvme3n1
00:09:50.147 13:43:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:09:50.147 13:43:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:09:50.147 13:43:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:09:50.147 13:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:50.147 13:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:50.147 13:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:09:50.147 13:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:09:50.147 13:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:09:50.404 No valid GPT data, bailing
00:09:50.404 13:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:09:50.404 13:43:37 -- scripts/common.sh@394 -- # pt=
00:09:50.404 13:43:37 -- scripts/common.sh@395 -- # return 1
00:09:50.404 13:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:09:50.404 1+0 records in
00:09:50.404 1+0 records out
00:09:50.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127876 s, 82.0 MB/s
00:09:50.404 13:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:50.404 13:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:50.404 13:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:09:50.404 13:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:09:50.404 13:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:09:50.404 No valid GPT data, bailing
00:09:50.404 13:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:09:50.404 13:43:37 -- scripts/common.sh@394 -- # pt=
00:09:50.404 13:43:37 -- scripts/common.sh@395 -- # return 1
00:09:50.404 13:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:09:50.404 1+0 records in
00:09:50.404 1+0 records out
00:09:50.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058044 s, 181 MB/s
00:09:50.404 13:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:50.404 13:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:50.404 13:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:09:50.404 13:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:09:50.404 13:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:09:50.405 No valid GPT data, bailing
00:09:50.405 13:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:09:50.405 13:43:37 -- scripts/common.sh@394 -- # pt=
00:09:50.405 13:43:37 -- scripts/common.sh@395 -- # return 1
00:09:50.405 13:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:09:50.405 1+0 records in
00:09:50.405 1+0 records out
00:09:50.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602008 s, 174 MB/s
00:09:50.405 13:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:50.405 13:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:50.405 13:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2
00:09:50.405 13:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt
00:09:50.405 13:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2
00:09:50.661 No valid GPT data, bailing
00:09:50.661 13:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2
00:09:50.661 13:43:37 -- scripts/common.sh@394 -- # pt=
00:09:50.661 13:43:37 -- scripts/common.sh@395 -- # return 1
00:09:50.661 13:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1
00:09:50.661 1+0 records in
00:09:50.661 1+0 records out
00:09:50.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450669 s, 233 MB/s
00:09:50.661 13:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:50.661 13:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:50.661 13:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3
00:09:50.661 13:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt
00:09:50.661 13:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3
00:09:50.661 No valid GPT data, bailing
00:09:50.661 13:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3
00:09:50.661 13:43:37 -- scripts/common.sh@394 -- # pt=
00:09:50.661 13:43:37 -- scripts/common.sh@395 -- # return 1
00:09:50.661 13:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1
00:09:50.661 1+0 records in
00:09:50.661 1+0 records out
00:09:50.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411598 s, 255 MB/s
00:09:50.661 13:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:50.661 13:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:50.661 13:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:09:50.661 13:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:09:50.661 13:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:09:50.661 No valid GPT data, bailing
00:09:50.661 13:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:09:50.661 13:43:37 -- scripts/common.sh@394 -- # pt=
00:09:50.661 13:43:37 -- scripts/common.sh@395 -- # return 1
00:09:50.661 13:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:09:50.661 1+0 records in
00:09:50.661 1+0 records out
00:09:50.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469658 s, 223 MB/s
00:09:50.661 13:43:37 -- spdk/autotest.sh@105 -- # sync
00:09:50.661 13:43:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:09:50.661 13:43:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:09:50.661 13:43:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:09:53.183 13:43:39 -- spdk/autotest.sh@111 -- # uname -s
00:09:53.183 13:43:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:09:53.183 13:43:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:09:53.183 13:43:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:09:53.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:54.061 Hugepages
00:09:54.061 node hugesize free / total
00:09:54.061 node0 1048576kB 0 / 0
00:09:54.319 node0 2048kB 0 / 0
00:09:54.319
00:09:54.319 Type BDF Vendor Device NUMA Driver Device Block devices
00:09:54.319 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:09:54.319 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:09:54.319 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:09:54.578 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:09:54.578 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:09:54.578 13:43:41 -- spdk/autotest.sh@117 -- # uname -s
00:09:54.578 13:43:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:09:54.578 13:43:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:09:54.578 13:43:41 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:55.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:56.078 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:56.078 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:56.078 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:56.078 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:56.078 13:43:42 -- common/autotest_common.sh@1515 -- # sleep 1
00:09:57.014 13:43:43 -- common/autotest_common.sh@1516 -- # bdfs=()
00:09:57.014 13:43:43 -- common/autotest_common.sh@1516 -- # local bdfs
00:09:57.014 13:43:43 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:09:57.014 13:43:43 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:09:57.014 13:43:43 -- common/autotest_common.sh@1496 -- # bdfs=()
00:09:57.014 13:43:43 -- common/autotest_common.sh@1496 -- # local bdfs
00:09:57.014 13:43:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:57.014 13:43:43 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:57.014 13:43:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:09:57.273 13:43:43 -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:09:57.273 13:43:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:57.273 13:43:43 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:57.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:57.789 Waiting for block devices as requested
00:09:58.047 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:58.047 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:58.047 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:58.047 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:03.312 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:03.312 13:43:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:10:03.312 13:43:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:10:03.312 13:43:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:10:03.312 13:43:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:10:03.312 13:43:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:10:03.312 13:43:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:10:03.312 13:43:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:10:03.312 13:43:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:10:03.312 13:43:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:10:03.312 13:43:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:10:03.312 13:43:50 -- common/autotest_common.sh@1529 -- # grep oacs
00:10:03.312 13:43:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:10:03.312 13:43:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:10:03.312 13:43:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:10:03.312 13:43:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:10:03.312 13:43:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:10:03.312 13:43:50 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:10:03.312 13:43:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:10:03.312 13:43:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:10:03.312 13:43:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:10:03.312 13:43:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:10:03.312 13:43:50 -- common/autotest_common.sh@1541 -- # continue
00:10:03.312 13:43:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:10:03.312 13:43:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:10:03.312 13:43:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:10:03.312 13:43:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:10:03.312 13:43:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:10:03.312 13:43:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:10:03.312 13:43:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:10:03.312 13:43:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:10:03.312 13:43:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:10:03.313 13:43:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # grep oacs
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:10:03.313 13:43:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:10:03.313 13:43:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:10:03.313 13:43:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1541 -- # continue
00:10:03.313 13:43:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:10:03.313 13:43:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0
00:10:03.313 13:43:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme
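The get_nvme_ctrlr_from_bdf calls traced above resolve each PCI address to its /dev/nvmeX character device through sysfs. The helper below is a minimal sketch of that pattern, not the SPDK original; the function name is hypothetical:

    # Map a PCI BDF to its NVMe char device, e.g. 0000:00:10.0 -> /dev/nvme1.
    nvme_ctrlr_from_bdf() {
        local bdf=$1 link
        for link in /sys/class/nvme/nvme*; do
            # readlink -f exposes the PCI path, e.g. .../0000:00:10.0/nvme/nvme1
            if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
                printf '/dev/%s\n' "$(basename "$link")"
                return 0
            fi
        done
        return 1
    }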
00:10:03.313 13:43:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # grep oacs
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:10:03.313 13:43:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:10:03.313 13:43:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:10:03.313 13:43:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1541 -- # continue
00:10:03.313 13:43:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:10:03.313 13:43:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0
00:10:03.313 13:43:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme
00:10:03.313 13:43:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # grep oacs
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:10:03.313 13:43:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:10:03.313 13:43:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:10:03.313 13:43:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:10:03.313 13:43:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
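Each resolved controller is then gated on its OACS (Optional Admin Command Support) word from nvme id-ctrl: the namespace-management bit (0x8) must be set before a namespace revert is attempted, and unvmcap is extracted the same way. A minimal sketch of that check, assuming nvme-cli's "oacs : 0x12a" output format seen here:

    # True when the controller advertises namespace management (OACS bit 3).
    supports_ns_mgmt() {
        local ctrlr=$1 oacs
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        # 0x12a & 0x8 == 8, which is why the trace records oacs_ns_manage=8
        (( oacs & 0x8 ))
    }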
00:10:03.313 13:43:50 -- common/autotest_common.sh@1541 -- # continue
00:10:03.313 13:43:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:10:03.313 13:43:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:03.313 13:43:50 -- common/autotest_common.sh@10 -- # set +x
00:10:03.571 13:43:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:10:03.571 13:43:50 -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:03.571 13:43:50 -- common/autotest_common.sh@10 -- # set +x
00:10:03.571 13:43:50 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:04.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:04.701 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:04.701 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:04.701 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:04.959 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:04.959 13:43:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:10:04.959 13:43:51 -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:04.959 13:43:51 -- common/autotest_common.sh@10 -- # set +x
00:10:04.959 13:43:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:10:04.959 13:43:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:10:04.959 13:43:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:10:04.959 13:43:51 -- common/autotest_common.sh@1561 -- # bdfs=()
00:10:04.959 13:43:51 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:10:04.959 13:43:51 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:10:04.959 13:43:51 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:10:04.959 13:43:51 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:10:04.959 13:43:51 -- common/autotest_common.sh@1496 -- # bdfs=()
00:10:04.959 13:43:51 -- common/autotest_common.sh@1496 -- # local bdfs
00:10:04.959 13:43:51 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:04.959 13:43:51 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:04.959 13:43:51 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:10:04.959 13:43:51 -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:10:04.959 13:43:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:04.959 13:43:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # device=0x0010
00:10:04.959 13:43:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:10:04.959 13:43:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # device=0x0010
00:10:04.959 13:43:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:10:04.959 13:43:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device
00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # device=0x0010
00:10:04.959 13:43:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
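opal_revert_cleanup only acts on controllers whose PCI device ID is 0x0a54; the trace reads each ID straight from sysfs and compares. A rough sketch of that scan (variable names hypothetical):

    # Keep only BDFs whose PCI device ID matches 0x0a54; the QEMU 0x0010
    # devices in this run all fail the comparison, so the list stays empty.
    opal_bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done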
00:10:04.959 13:43:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:10:04.959 13:43:51 -- common/autotest_common.sh@1564 -- # device=0x0010 00:10:04.959 13:43:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:04.959 13:43:51 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:10:04.959 13:43:51 -- common/autotest_common.sh@1570 -- # return 0 00:10:04.959 13:43:51 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:10:04.959 13:43:51 -- common/autotest_common.sh@1578 -- # return 0 00:10:04.959 13:43:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:04.959 13:43:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:04.959 13:43:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:04.959 13:43:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:04.959 13:43:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:04.959 13:43:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.959 13:43:51 -- common/autotest_common.sh@10 -- # set +x 00:10:04.959 13:43:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:04.959 13:43:51 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:04.959 13:43:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.959 13:43:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.959 13:43:51 -- common/autotest_common.sh@10 -- # set +x 00:10:05.217 ************************************ 00:10:05.217 START TEST env 00:10:05.217 ************************************ 00:10:05.217 13:43:51 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:05.217 * Looking for test storage... 00:10:05.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:05.217 13:43:51 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.217 13:43:51 env -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.217 13:43:51 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.217 13:43:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.217 13:43:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.217 13:43:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.217 13:43:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.217 13:43:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.217 13:43:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.217 13:43:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.217 13:43:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.217 13:43:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.217 13:43:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.217 13:43:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.217 13:43:52 env -- scripts/common.sh@344 -- # case "$op" in 00:10:05.217 13:43:52 env -- scripts/common.sh@345 -- # : 1 00:10:05.217 13:43:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.217 13:43:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.217 13:43:52 env -- scripts/common.sh@365 -- # decimal 1 00:10:05.217 13:43:52 env -- scripts/common.sh@353 -- # local d=1 00:10:05.217 13:43:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.217 13:43:52 env -- scripts/common.sh@355 -- # echo 1 00:10:05.217 13:43:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.217 13:43:52 env -- scripts/common.sh@366 -- # decimal 2 00:10:05.217 13:43:52 env -- scripts/common.sh@353 -- # local d=2 00:10:05.217 13:43:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.217 13:43:52 env -- scripts/common.sh@355 -- # echo 2 00:10:05.217 13:43:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.217 13:43:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.217 13:43:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.217 13:43:52 env -- scripts/common.sh@368 -- # return 0 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.217 --rc genhtml_branch_coverage=1 00:10:05.217 --rc genhtml_function_coverage=1 00:10:05.217 --rc genhtml_legend=1 00:10:05.217 --rc geninfo_all_blocks=1 00:10:05.217 --rc geninfo_unexecuted_blocks=1 00:10:05.217 00:10:05.217 ' 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.217 --rc genhtml_branch_coverage=1 00:10:05.217 --rc genhtml_function_coverage=1 00:10:05.217 --rc genhtml_legend=1 00:10:05.217 --rc geninfo_all_blocks=1 00:10:05.217 --rc geninfo_unexecuted_blocks=1 00:10:05.217 00:10:05.217 ' 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.217 --rc genhtml_branch_coverage=1 00:10:05.217 --rc genhtml_function_coverage=1 00:10:05.217 --rc genhtml_legend=1 00:10:05.217 --rc geninfo_all_blocks=1 00:10:05.217 --rc geninfo_unexecuted_blocks=1 00:10:05.217 00:10:05.217 ' 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.217 --rc genhtml_branch_coverage=1 00:10:05.217 --rc genhtml_function_coverage=1 00:10:05.217 --rc genhtml_legend=1 00:10:05.217 --rc geninfo_all_blocks=1 00:10:05.217 --rc geninfo_unexecuted_blocks=1 00:10:05.217 00:10:05.217 ' 00:10:05.217 13:43:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.217 13:43:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.217 13:43:52 env -- common/autotest_common.sh@10 -- # set +x 00:10:05.217 ************************************ 00:10:05.217 START TEST env_memory 00:10:05.217 ************************************ 00:10:05.217 13:43:52 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:05.217 00:10:05.217 00:10:05.217 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.217 http://cunit.sourceforge.net/ 00:10:05.217 00:10:05.217 00:10:05.217 Suite: memory 00:10:05.475 Test: alloc and free memory map ...[2024-11-04 13:43:52.148614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:05.475 passed 00:10:05.475 Test: mem map translation ...[2024-11-04 13:43:52.207627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:05.475 [2024-11-04 13:43:52.207735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:05.475 [2024-11-04 13:43:52.207834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:05.475 [2024-11-04 13:43:52.207884] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:05.475 passed 00:10:05.475 Test: mem map registration ...[2024-11-04 13:43:52.285593] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:05.475 [2024-11-04 13:43:52.285682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:05.475 passed 00:10:05.475 Test: mem map adjacent registrations ...passed 00:10:05.475 00:10:05.475 Run Summary: Type Total Ran Passed Failed Inactive 00:10:05.475 suites 1 1 n/a 0 0 00:10:05.475 tests 4 4 4 0 0 00:10:05.475 asserts 152 152 152 0 n/a 00:10:05.475 00:10:05.475 Elapsed time = 0.283 seconds 00:10:05.734 00:10:05.734 real 0m0.323s 00:10:05.734 user 0m0.286s 00:10:05.734 sys 0m0.031s 00:10:05.734 13:43:52 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.734 13:43:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:05.734 ************************************ 00:10:05.734 END TEST env_memory 00:10:05.734 ************************************ 00:10:05.734 13:43:52 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:05.734 13:43:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.734 13:43:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.734 13:43:52 env -- common/autotest_common.sh@10 -- # set +x 00:10:05.734 ************************************ 00:10:05.734 START TEST env_vtophys 00:10:05.734 ************************************ 00:10:05.734 13:43:52 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:05.734 EAL: lib.eal log level changed from notice to debug 00:10:05.734 EAL: Detected lcore 0 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 1 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 2 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 3 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 4 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 5 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 6 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 7 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 8 as core 0 on socket 0 00:10:05.734 EAL: Detected lcore 9 as core 0 on socket 0 00:10:05.734 EAL: Maximum logical cores by configuration: 128 00:10:05.734 EAL: Detected CPU lcores: 10 00:10:05.734 EAL: Detected NUMA nodes: 1 00:10:05.734 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:05.735 EAL: Detected shared linkage of DPDK 00:10:05.735 EAL: No 
shared files mode enabled, IPC will be disabled 00:10:05.735 EAL: Selected IOVA mode 'PA' 00:10:05.735 EAL: Probing VFIO support... 00:10:05.735 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:05.735 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:05.735 EAL: Ask a virtual area of 0x2e000 bytes 00:10:05.735 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:05.735 EAL: Setting up physically contiguous memory... 00:10:05.735 EAL: Setting maximum number of open files to 524288 00:10:05.735 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:05.735 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:05.735 EAL: Ask a virtual area of 0x61000 bytes 00:10:05.735 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:05.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:05.735 EAL: Ask a virtual area of 0x400000000 bytes 00:10:05.735 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:05.735 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:05.735 EAL: Ask a virtual area of 0x61000 bytes 00:10:05.735 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:05.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:05.735 EAL: Ask a virtual area of 0x400000000 bytes 00:10:05.735 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:05.735 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:05.735 EAL: Ask a virtual area of 0x61000 bytes 00:10:05.735 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:05.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:05.735 EAL: Ask a virtual area of 0x400000000 bytes 00:10:05.735 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:05.735 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:05.735 EAL: Ask a virtual area of 0x61000 bytes 00:10:05.735 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:05.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:05.735 EAL: Ask a virtual area of 0x400000000 bytes 00:10:05.735 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:05.735 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:05.735 EAL: Hugepages will be freed exactly as allocated. 00:10:05.735 EAL: No shared files mode enabled, IPC is disabled 00:10:05.735 EAL: No shared files mode enabled, IPC is disabled 00:10:05.992 EAL: TSC frequency is ~2100000 KHz 00:10:05.993 EAL: Main lcore 0 is ready (tid=7f363780da40;cpuset=[0]) 00:10:05.993 EAL: Trying to obtain current memory policy. 00:10:05.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:05.993 EAL: Restoring previous memory policy: 0 00:10:05.993 EAL: request: mp_malloc_sync 00:10:05.993 EAL: No shared files mode enabled, IPC is disabled 00:10:05.993 EAL: Heap on socket 0 was expanded by 2MB 00:10:05.993 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:05.993 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:05.993 EAL: Mem event callback 'spdk:(nil)' registered 00:10:05.993 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:10:05.993 00:10:05.993 00:10:05.993 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.993 http://cunit.sourceforge.net/ 00:10:05.993 00:10:05.993 00:10:05.993 Suite: components_suite 00:10:06.563 Test: vtophys_malloc_test ...passed 00:10:06.563 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:06.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:06.563 EAL: Restoring previous memory policy: 4 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was expanded by 4MB 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was shrunk by 4MB 00:10:06.563 EAL: Trying to obtain current memory policy. 00:10:06.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:06.563 EAL: Restoring previous memory policy: 4 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was expanded by 6MB 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was shrunk by 6MB 00:10:06.563 EAL: Trying to obtain current memory policy. 00:10:06.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:06.563 EAL: Restoring previous memory policy: 4 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was expanded by 10MB 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was shrunk by 10MB 00:10:06.563 EAL: Trying to obtain current memory policy. 00:10:06.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:06.563 EAL: Restoring previous memory policy: 4 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was expanded by 18MB 00:10:06.563 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.563 EAL: request: mp_malloc_sync 00:10:06.563 EAL: No shared files mode enabled, IPC is disabled 00:10:06.563 EAL: Heap on socket 0 was shrunk by 18MB 00:10:06.821 EAL: Trying to obtain current memory policy. 00:10:06.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:06.821 EAL: Restoring previous memory policy: 4 00:10:06.821 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.821 EAL: request: mp_malloc_sync 00:10:06.821 EAL: No shared files mode enabled, IPC is disabled 00:10:06.821 EAL: Heap on socket 0 was expanded by 34MB 00:10:06.821 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.821 EAL: request: mp_malloc_sync 00:10:06.821 EAL: No shared files mode enabled, IPC is disabled 00:10:06.821 EAL: Heap on socket 0 was shrunk by 34MB 00:10:06.821 EAL: Trying to obtain current memory policy. 
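
The vtophys_spdk_malloc_test sequence above (and continuing below through the 1026MB step) shows each SPDK env allocation firing the registered 'spdk:(nil)' mem event callback as EAL expands the socket-0 heap, with a matching shrink when the buffer is freed. A minimal sketch for reproducing this run outside the CI harness, assuming a standard SPDK checkout and that scripts/setup.sh takes HUGEMEM in megabytes:

sudo HUGEMEM=2048 ./scripts/setup.sh   # reserve 2 MiB hugepages for the EAL heap seen above
./scripts/setup.sh status              # sanity-check hugepage and device state
./test/env/vtophys/vtophys             # run the suite standalone
sudo ./scripts/setup.sh reset          # hand hugepages and devices back to the kernel
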
00:10:06.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:06.821 EAL: Restoring previous memory policy: 4 00:10:06.821 EAL: Calling mem event callback 'spdk:(nil)' 00:10:06.821 EAL: request: mp_malloc_sync 00:10:06.821 EAL: No shared files mode enabled, IPC is disabled 00:10:06.821 EAL: Heap on socket 0 was expanded by 66MB 00:10:07.080 EAL: Calling mem event callback 'spdk:(nil)' 00:10:07.080 EAL: request: mp_malloc_sync 00:10:07.080 EAL: No shared files mode enabled, IPC is disabled 00:10:07.080 EAL: Heap on socket 0 was shrunk by 66MB 00:10:07.080 EAL: Trying to obtain current memory policy. 00:10:07.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:07.080 EAL: Restoring previous memory policy: 4 00:10:07.080 EAL: Calling mem event callback 'spdk:(nil)' 00:10:07.080 EAL: request: mp_malloc_sync 00:10:07.080 EAL: No shared files mode enabled, IPC is disabled 00:10:07.080 EAL: Heap on socket 0 was expanded by 130MB 00:10:07.338 EAL: Calling mem event callback 'spdk:(nil)' 00:10:07.338 EAL: request: mp_malloc_sync 00:10:07.338 EAL: No shared files mode enabled, IPC is disabled 00:10:07.338 EAL: Heap on socket 0 was shrunk by 130MB 00:10:07.597 EAL: Trying to obtain current memory policy. 00:10:07.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:07.597 EAL: Restoring previous memory policy: 4 00:10:07.597 EAL: Calling mem event callback 'spdk:(nil)' 00:10:07.597 EAL: request: mp_malloc_sync 00:10:07.597 EAL: No shared files mode enabled, IPC is disabled 00:10:07.597 EAL: Heap on socket 0 was expanded by 258MB 00:10:08.163 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.163 EAL: request: mp_malloc_sync 00:10:08.163 EAL: No shared files mode enabled, IPC is disabled 00:10:08.163 EAL: Heap on socket 0 was shrunk by 258MB 00:10:08.729 EAL: Trying to obtain current memory policy. 00:10:08.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:08.729 EAL: Restoring previous memory policy: 4 00:10:08.729 EAL: Calling mem event callback 'spdk:(nil)' 00:10:08.729 EAL: request: mp_malloc_sync 00:10:08.729 EAL: No shared files mode enabled, IPC is disabled 00:10:08.729 EAL: Heap on socket 0 was expanded by 514MB 00:10:09.664 EAL: Calling mem event callback 'spdk:(nil)' 00:10:09.922 EAL: request: mp_malloc_sync 00:10:09.922 EAL: No shared files mode enabled, IPC is disabled 00:10:09.922 EAL: Heap on socket 0 was shrunk by 514MB 00:10:10.858 EAL: Trying to obtain current memory policy. 
00:10:10.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:11.117 EAL: Restoring previous memory policy: 4 00:10:11.117 EAL: Calling mem event callback 'spdk:(nil)' 00:10:11.117 EAL: request: mp_malloc_sync 00:10:11.117 EAL: No shared files mode enabled, IPC is disabled 00:10:11.117 EAL: Heap on socket 0 was expanded by 1026MB 00:10:13.016 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.276 EAL: request: mp_malloc_sync 00:10:13.276 EAL: No shared files mode enabled, IPC is disabled 00:10:13.276 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:15.226 passed 00:10:15.226 00:10:15.226 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.226 suites 1 1 n/a 0 0 00:10:15.226 tests 2 2 2 0 0 00:10:15.226 asserts 5712 5712 5712 0 n/a 00:10:15.226 00:10:15.226 Elapsed time = 9.129 seconds 00:10:15.226 EAL: Calling mem event callback 'spdk:(nil)' 00:10:15.226 EAL: request: mp_malloc_sync 00:10:15.226 EAL: No shared files mode enabled, IPC is disabled 00:10:15.226 EAL: Heap on socket 0 was shrunk by 2MB 00:10:15.226 EAL: No shared files mode enabled, IPC is disabled 00:10:15.226 EAL: No shared files mode enabled, IPC is disabled 00:10:15.226 EAL: No shared files mode enabled, IPC is disabled 00:10:15.226 00:10:15.226 real 0m9.495s 00:10:15.226 user 0m8.301s 00:10:15.226 sys 0m1.022s 00:10:15.226 13:44:01 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.226 13:44:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:15.226 ************************************ 00:10:15.226 END TEST env_vtophys 00:10:15.226 ************************************ 00:10:15.226 13:44:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:15.226 13:44:02 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:15.226 13:44:02 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.226 13:44:02 env -- common/autotest_common.sh@10 -- # set +x 00:10:15.226 ************************************ 00:10:15.226 START TEST env_pci 00:10:15.226 ************************************ 00:10:15.226 13:44:02 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:15.226 00:10:15.226 00:10:15.226 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.226 http://cunit.sourceforge.net/ 00:10:15.226 00:10:15.226 00:10:15.226 Suite: pci 00:10:15.226 Test: pci_hook ...[2024-11-04 13:44:02.046216] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58280 has claimed it 00:10:15.226 EAL: Cannot find device (10000:00:01.0) 00:10:15.226 EAL: Failed to attach device on primary process 00:10:15.226 passed 00:10:15.226 00:10:15.226 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.226 suites 1 1 n/a 0 0 00:10:15.226 tests 1 1 1 0 0 00:10:15.226 asserts 25 25 25 0 n/a 00:10:15.226 00:10:15.227 Elapsed time = 0.007 seconds 00:10:15.227 00:10:15.227 real 0m0.083s 00:10:15.227 user 0m0.038s 00:10:15.227 sys 0m0.045s 00:10:15.227 13:44:02 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.227 13:44:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:15.227 ************************************ 00:10:15.227 END TEST env_pci 00:10:15.227 ************************************ 00:10:15.227 13:44:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:15.227 13:44:02 env -- env/env.sh@15 -- # uname 00:10:15.227 13:44:02 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:15.227 13:44:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:15.227 13:44:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:15.227 13:44:02 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:15.227 13:44:02 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.227 13:44:02 env -- common/autotest_common.sh@10 -- # set +x 00:10:15.484 ************************************ 00:10:15.484 START TEST env_dpdk_post_init 00:10:15.484 ************************************ 00:10:15.484 13:44:02 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:15.484 EAL: Detected CPU lcores: 10 00:10:15.484 EAL: Detected NUMA nodes: 1 00:10:15.484 EAL: Detected shared linkage of DPDK 00:10:15.484 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:15.484 EAL: Selected IOVA mode 'PA' 00:10:15.484 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:15.743 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:15.743 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:15.743 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:10:15.743 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:10:15.743 Starting DPDK initialization... 00:10:15.743 Starting SPDK post initialization... 00:10:15.743 SPDK NVMe probe 00:10:15.743 Attaching to 0000:00:10.0 00:10:15.743 Attaching to 0000:00:11.0 00:10:15.743 Attaching to 0000:00:12.0 00:10:15.743 Attaching to 0000:00:13.0 00:10:15.743 Attached to 0000:00:10.0 00:10:15.743 Attached to 0000:00:11.0 00:10:15.743 Attached to 0000:00:13.0 00:10:15.743 Attached to 0000:00:12.0 00:10:15.743 Cleaning up... 
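
The probe pass above finds the four QEMU-emulated NVMe controllers (vendor:device 1b36:0010) at 0000:00:10.0 through 0000:00:13.0 and attaches the spdk_nvme driver to each; the suite's timing summary follows below. Reproducing it by hand requires binding the devices to a userspace driver first. A sketch, assuming a recent setup.sh that honors PCI_ALLOWED (the BDF list is taken from this run; the test binary invocation is verbatim from this log):

sudo PCI_ALLOWED="0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0" ./scripts/setup.sh
./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
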
00:10:15.743 00:10:15.743 real 0m0.348s 00:10:15.743 user 0m0.134s 00:10:15.743 sys 0m0.116s 00:10:15.743 13:44:02 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.743 13:44:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:15.743 ************************************ 00:10:15.743 END TEST env_dpdk_post_init 00:10:15.743 ************************************ 00:10:15.743 13:44:02 env -- env/env.sh@26 -- # uname 00:10:15.743 13:44:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:15.743 13:44:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:15.743 13:44:02 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:15.743 13:44:02 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.743 13:44:02 env -- common/autotest_common.sh@10 -- # set +x 00:10:15.743 ************************************ 00:10:15.743 START TEST env_mem_callbacks 00:10:15.743 ************************************ 00:10:15.743 13:44:02 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:15.743 EAL: Detected CPU lcores: 10 00:10:15.743 EAL: Detected NUMA nodes: 1 00:10:15.743 EAL: Detected shared linkage of DPDK 00:10:15.743 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:16.045 EAL: Selected IOVA mode 'PA' 00:10:16.045 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:16.045 00:10:16.045 00:10:16.045 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.045 http://cunit.sourceforge.net/ 00:10:16.045 00:10:16.045 00:10:16.045 Suite: memory 00:10:16.045 Test: test ... 00:10:16.045 register 0x200000200000 2097152 00:10:16.045 malloc 3145728 00:10:16.045 register 0x200000400000 4194304 00:10:16.045 buf 0x2000004fffc0 len 3145728 PASSED 00:10:16.045 malloc 64 00:10:16.045 buf 0x2000004ffec0 len 64 PASSED 00:10:16.045 malloc 4194304 00:10:16.045 register 0x200000800000 6291456 00:10:16.045 buf 0x2000009fffc0 len 4194304 PASSED 00:10:16.045 free 0x2000004fffc0 3145728 00:10:16.045 free 0x2000004ffec0 64 00:10:16.045 unregister 0x200000400000 4194304 PASSED 00:10:16.045 free 0x2000009fffc0 4194304 00:10:16.045 unregister 0x200000800000 6291456 PASSED 00:10:16.045 malloc 8388608 00:10:16.045 register 0x200000400000 10485760 00:10:16.045 buf 0x2000005fffc0 len 8388608 PASSED 00:10:16.045 free 0x2000005fffc0 8388608 00:10:16.045 unregister 0x200000400000 10485760 PASSED 00:10:16.045 passed 00:10:16.045 00:10:16.045 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.045 suites 1 1 n/a 0 0 00:10:16.045 tests 1 1 1 0 0 00:10:16.045 asserts 15 15 15 0 n/a 00:10:16.045 00:10:16.045 Elapsed time = 0.108 seconds 00:10:16.045 00:10:16.045 real 0m0.346s 00:10:16.045 user 0m0.159s 00:10:16.045 sys 0m0.085s 00:10:16.045 13:44:02 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.045 ************************************ 00:10:16.045 END TEST env_mem_callbacks 00:10:16.045 ************************************ 00:10:16.045 13:44:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:16.045 00:10:16.045 real 0m11.076s 00:10:16.045 user 0m9.122s 00:10:16.045 sys 0m1.584s 00:10:16.045 13:44:02 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.045 13:44:02 env -- common/autotest_common.sh@10 -- # set +x 00:10:16.045 ************************************ 00:10:16.045 END TEST env 00:10:16.045 
************************************ 00:10:16.304 13:44:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:16.304 13:44:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:16.304 13:44:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.304 13:44:02 -- common/autotest_common.sh@10 -- # set +x 00:10:16.304 ************************************ 00:10:16.304 START TEST rpc 00:10:16.304 ************************************ 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:16.304 * Looking for test storage... 00:10:16.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.304 13:44:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.304 13:44:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.304 13:44:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.304 13:44:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.304 13:44:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.304 13:44:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:16.304 13:44:03 rpc -- scripts/common.sh@345 -- # : 1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.304 13:44:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.304 13:44:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@353 -- # local d=1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.304 13:44:03 rpc -- scripts/common.sh@355 -- # echo 1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.304 13:44:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@353 -- # local d=2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.304 13:44:03 rpc -- scripts/common.sh@355 -- # echo 2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.304 13:44:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.304 13:44:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.304 13:44:03 rpc -- scripts/common.sh@368 -- # return 0 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:16.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.304 --rc genhtml_branch_coverage=1 00:10:16.304 --rc genhtml_function_coverage=1 00:10:16.304 --rc genhtml_legend=1 00:10:16.304 --rc geninfo_all_blocks=1 00:10:16.304 --rc geninfo_unexecuted_blocks=1 00:10:16.304 00:10:16.304 ' 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:16.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.304 --rc genhtml_branch_coverage=1 00:10:16.304 --rc genhtml_function_coverage=1 00:10:16.304 --rc genhtml_legend=1 00:10:16.304 --rc geninfo_all_blocks=1 00:10:16.304 --rc geninfo_unexecuted_blocks=1 00:10:16.304 00:10:16.304 ' 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:16.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.304 --rc genhtml_branch_coverage=1 00:10:16.304 --rc genhtml_function_coverage=1 00:10:16.304 --rc genhtml_legend=1 00:10:16.304 --rc geninfo_all_blocks=1 00:10:16.304 --rc geninfo_unexecuted_blocks=1 00:10:16.304 00:10:16.304 ' 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:16.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.304 --rc genhtml_branch_coverage=1 00:10:16.304 --rc genhtml_function_coverage=1 00:10:16.304 --rc genhtml_legend=1 00:10:16.304 --rc geninfo_all_blocks=1 00:10:16.304 --rc geninfo_unexecuted_blocks=1 00:10:16.304 00:10:16.304 ' 00:10:16.304 13:44:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58407 00:10:16.304 13:44:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:16.304 13:44:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:16.304 13:44:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58407 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@833 -- # '[' -z 58407 ']' 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
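
Up to this point the rpc suite has launched spdk_tgt with -e bdev (enabling the bdev tracepoint group that rpc_trace_cmd_test inspects later) and is waiting for the /var/tmp/spdk.sock listener; the rpc_cmd calls that follow are thin wrappers over scripts/rpc.py. The rpc_integrity flow below can be replayed by hand; a sketch, with method names taken from this log and comments illustrative:

./build/bin/spdk_tgt -e bdev &                 # target with bdev tracepoints, as above
./scripts/rpc.py rpc_get_methods > /dev/null   # fails until the socket exists; the harness loops in waitforlisten instead
./scripts/rpc.py bdev_malloc_create 8 512      # 8 MiB malloc bdev, 512 B blocks -> Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length    # 2 while Passthru0 is stacked on Malloc0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length    # back to 0, as the test asserts
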
00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.304 13:44:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.562 [2024-11-04 13:44:03.363338] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:10:16.562 [2024-11-04 13:44:03.363521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58407 ] 00:10:16.821 [2024-11-04 13:44:03.555914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.821 [2024-11-04 13:44:03.676760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:16.821 [2024-11-04 13:44:03.676820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58407' to capture a snapshot of events at runtime. 00:10:16.821 [2024-11-04 13:44:03.676836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.821 [2024-11-04 13:44:03.676852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.821 [2024-11-04 13:44:03.676865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58407 for offline analysis/debug. 00:10:16.821 [2024-11-04 13:44:03.678399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.756 13:44:04 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.756 13:44:04 rpc -- common/autotest_common.sh@866 -- # return 0 00:10:17.756 13:44:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:17.756 13:44:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:17.756 13:44:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:17.756 13:44:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:17.756 13:44:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:17.756 13:44:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:17.756 13:44:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.756 ************************************ 00:10:17.756 START TEST rpc_integrity 00:10:17.756 ************************************ 00:10:17.756 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:10:17.756 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:17.756 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.756 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:17.756 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.756 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:17.756 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:18.014 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:18.014 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:18.014 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.014 13:44:04 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.014 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.014 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:18.014 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:18.014 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.014 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.014 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.014 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:18.014 { 00:10:18.014 "name": "Malloc0", 00:10:18.014 "aliases": [ 00:10:18.014 "1c43c496-0abf-4287-acdd-23f38b31fcf9" 00:10:18.014 ], 00:10:18.014 "product_name": "Malloc disk", 00:10:18.014 "block_size": 512, 00:10:18.014 "num_blocks": 16384, 00:10:18.014 "uuid": "1c43c496-0abf-4287-acdd-23f38b31fcf9", 00:10:18.014 "assigned_rate_limits": { 00:10:18.014 "rw_ios_per_sec": 0, 00:10:18.014 "rw_mbytes_per_sec": 0, 00:10:18.014 "r_mbytes_per_sec": 0, 00:10:18.014 "w_mbytes_per_sec": 0 00:10:18.014 }, 00:10:18.014 "claimed": false, 00:10:18.014 "zoned": false, 00:10:18.014 "supported_io_types": { 00:10:18.014 "read": true, 00:10:18.014 "write": true, 00:10:18.014 "unmap": true, 00:10:18.014 "flush": true, 00:10:18.014 "reset": true, 00:10:18.014 "nvme_admin": false, 00:10:18.014 "nvme_io": false, 00:10:18.014 "nvme_io_md": false, 00:10:18.014 "write_zeroes": true, 00:10:18.014 "zcopy": true, 00:10:18.014 "get_zone_info": false, 00:10:18.014 "zone_management": false, 00:10:18.014 "zone_append": false, 00:10:18.014 "compare": false, 00:10:18.014 "compare_and_write": false, 00:10:18.014 "abort": true, 00:10:18.014 "seek_hole": false, 00:10:18.014 "seek_data": false, 00:10:18.014 "copy": true, 00:10:18.014 "nvme_iov_md": false 00:10:18.014 }, 00:10:18.014 "memory_domains": [ 00:10:18.014 { 00:10:18.015 "dma_device_id": "system", 00:10:18.015 "dma_device_type": 1 00:10:18.015 }, 00:10:18.015 { 00:10:18.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.015 "dma_device_type": 2 00:10:18.015 } 00:10:18.015 ], 00:10:18.015 "driver_specific": {} 00:10:18.015 } 00:10:18.015 ]' 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.015 [2024-11-04 13:44:04.830715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:18.015 [2024-11-04 13:44:04.830793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.015 [2024-11-04 13:44:04.830834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:18.015 [2024-11-04 13:44:04.830852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.015 [2024-11-04 13:44:04.833807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.015 [2024-11-04 13:44:04.833861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:18.015 Passthru0 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.015 
13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:18.015 { 00:10:18.015 "name": "Malloc0", 00:10:18.015 "aliases": [ 00:10:18.015 "1c43c496-0abf-4287-acdd-23f38b31fcf9" 00:10:18.015 ], 00:10:18.015 "product_name": "Malloc disk", 00:10:18.015 "block_size": 512, 00:10:18.015 "num_blocks": 16384, 00:10:18.015 "uuid": "1c43c496-0abf-4287-acdd-23f38b31fcf9", 00:10:18.015 "assigned_rate_limits": { 00:10:18.015 "rw_ios_per_sec": 0, 00:10:18.015 "rw_mbytes_per_sec": 0, 00:10:18.015 "r_mbytes_per_sec": 0, 00:10:18.015 "w_mbytes_per_sec": 0 00:10:18.015 }, 00:10:18.015 "claimed": true, 00:10:18.015 "claim_type": "exclusive_write", 00:10:18.015 "zoned": false, 00:10:18.015 "supported_io_types": { 00:10:18.015 "read": true, 00:10:18.015 "write": true, 00:10:18.015 "unmap": true, 00:10:18.015 "flush": true, 00:10:18.015 "reset": true, 00:10:18.015 "nvme_admin": false, 00:10:18.015 "nvme_io": false, 00:10:18.015 "nvme_io_md": false, 00:10:18.015 "write_zeroes": true, 00:10:18.015 "zcopy": true, 00:10:18.015 "get_zone_info": false, 00:10:18.015 "zone_management": false, 00:10:18.015 "zone_append": false, 00:10:18.015 "compare": false, 00:10:18.015 "compare_and_write": false, 00:10:18.015 "abort": true, 00:10:18.015 "seek_hole": false, 00:10:18.015 "seek_data": false, 00:10:18.015 "copy": true, 00:10:18.015 "nvme_iov_md": false 00:10:18.015 }, 00:10:18.015 "memory_domains": [ 00:10:18.015 { 00:10:18.015 "dma_device_id": "system", 00:10:18.015 "dma_device_type": 1 00:10:18.015 }, 00:10:18.015 { 00:10:18.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.015 "dma_device_type": 2 00:10:18.015 } 00:10:18.015 ], 00:10:18.015 "driver_specific": {} 00:10:18.015 }, 00:10:18.015 { 00:10:18.015 "name": "Passthru0", 00:10:18.015 "aliases": [ 00:10:18.015 "282d65d0-acc0-542c-811d-68c33efdc91f" 00:10:18.015 ], 00:10:18.015 "product_name": "passthru", 00:10:18.015 "block_size": 512, 00:10:18.015 "num_blocks": 16384, 00:10:18.015 "uuid": "282d65d0-acc0-542c-811d-68c33efdc91f", 00:10:18.015 "assigned_rate_limits": { 00:10:18.015 "rw_ios_per_sec": 0, 00:10:18.015 "rw_mbytes_per_sec": 0, 00:10:18.015 "r_mbytes_per_sec": 0, 00:10:18.015 "w_mbytes_per_sec": 0 00:10:18.015 }, 00:10:18.015 "claimed": false, 00:10:18.015 "zoned": false, 00:10:18.015 "supported_io_types": { 00:10:18.015 "read": true, 00:10:18.015 "write": true, 00:10:18.015 "unmap": true, 00:10:18.015 "flush": true, 00:10:18.015 "reset": true, 00:10:18.015 "nvme_admin": false, 00:10:18.015 "nvme_io": false, 00:10:18.015 "nvme_io_md": false, 00:10:18.015 "write_zeroes": true, 00:10:18.015 "zcopy": true, 00:10:18.015 "get_zone_info": false, 00:10:18.015 "zone_management": false, 00:10:18.015 "zone_append": false, 00:10:18.015 "compare": false, 00:10:18.015 "compare_and_write": false, 00:10:18.015 "abort": true, 00:10:18.015 "seek_hole": false, 00:10:18.015 "seek_data": false, 00:10:18.015 "copy": true, 00:10:18.015 "nvme_iov_md": false 00:10:18.015 }, 00:10:18.015 "memory_domains": [ 00:10:18.015 { 00:10:18.015 "dma_device_id": "system", 00:10:18.015 "dma_device_type": 1 00:10:18.015 }, 00:10:18.015 { 00:10:18.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.015 "dma_device_type": 2 
00:10:18.015 } 00:10:18.015 ], 00:10:18.015 "driver_specific": { 00:10:18.015 "passthru": { 00:10:18.015 "name": "Passthru0", 00:10:18.015 "base_bdev_name": "Malloc0" 00:10:18.015 } 00:10:18.015 } 00:10:18.015 } 00:10:18.015 ]' 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:18.015 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.015 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.273 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.273 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 13:44:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.273 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:18.273 13:44:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:18.273 ************************************ 00:10:18.273 END TEST rpc_integrity 00:10:18.273 ************************************ 00:10:18.273 13:44:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:18.273 00:10:18.273 real 0m0.370s 00:10:18.273 user 0m0.209s 00:10:18.273 sys 0m0.054s 00:10:18.273 13:44:05 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.273 13:44:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 13:44:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:18.273 13:44:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:18.273 13:44:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.273 13:44:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 ************************************ 00:10:18.273 START TEST rpc_plugins 00:10:18.273 ************************************ 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:18.273 { 00:10:18.273 "name": "Malloc1", 00:10:18.273 "aliases": 
[ 00:10:18.273 "95e91962-b26f-4dd8-9e4c-a6cb06c3383e" 00:10:18.273 ], 00:10:18.273 "product_name": "Malloc disk", 00:10:18.273 "block_size": 4096, 00:10:18.273 "num_blocks": 256, 00:10:18.273 "uuid": "95e91962-b26f-4dd8-9e4c-a6cb06c3383e", 00:10:18.273 "assigned_rate_limits": { 00:10:18.273 "rw_ios_per_sec": 0, 00:10:18.273 "rw_mbytes_per_sec": 0, 00:10:18.273 "r_mbytes_per_sec": 0, 00:10:18.273 "w_mbytes_per_sec": 0 00:10:18.273 }, 00:10:18.273 "claimed": false, 00:10:18.273 "zoned": false, 00:10:18.273 "supported_io_types": { 00:10:18.273 "read": true, 00:10:18.273 "write": true, 00:10:18.273 "unmap": true, 00:10:18.273 "flush": true, 00:10:18.273 "reset": true, 00:10:18.273 "nvme_admin": false, 00:10:18.273 "nvme_io": false, 00:10:18.273 "nvme_io_md": false, 00:10:18.273 "write_zeroes": true, 00:10:18.273 "zcopy": true, 00:10:18.273 "get_zone_info": false, 00:10:18.273 "zone_management": false, 00:10:18.273 "zone_append": false, 00:10:18.273 "compare": false, 00:10:18.273 "compare_and_write": false, 00:10:18.273 "abort": true, 00:10:18.273 "seek_hole": false, 00:10:18.273 "seek_data": false, 00:10:18.273 "copy": true, 00:10:18.273 "nvme_iov_md": false 00:10:18.273 }, 00:10:18.273 "memory_domains": [ 00:10:18.273 { 00:10:18.273 "dma_device_id": "system", 00:10:18.273 "dma_device_type": 1 00:10:18.273 }, 00:10:18.273 { 00:10:18.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.273 "dma_device_type": 2 00:10:18.273 } 00:10:18.273 ], 00:10:18.273 "driver_specific": {} 00:10:18.273 } 00:10:18.273 ]' 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:18.273 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.273 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:18.274 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.532 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:18.532 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.532 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.532 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:18.532 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:18.532 ************************************ 00:10:18.532 END TEST rpc_plugins 00:10:18.532 ************************************ 00:10:18.532 13:44:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:18.532 00:10:18.532 real 0m0.163s 00:10:18.532 user 0m0.095s 00:10:18.532 sys 0m0.021s 00:10:18.532 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.532 13:44:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 13:44:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:18.532 13:44:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:18.532 13:44:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.532 13:44:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 ************************************ 00:10:18.532 START TEST rpc_trace_cmd_test 00:10:18.532 ************************************ 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:18.532 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58407", 00:10:18.532 "tpoint_group_mask": "0x8", 00:10:18.532 "iscsi_conn": { 00:10:18.532 "mask": "0x2", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "scsi": { 00:10:18.532 "mask": "0x4", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "bdev": { 00:10:18.532 "mask": "0x8", 00:10:18.532 "tpoint_mask": "0xffffffffffffffff" 00:10:18.532 }, 00:10:18.532 "nvmf_rdma": { 00:10:18.532 "mask": "0x10", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "nvmf_tcp": { 00:10:18.532 "mask": "0x20", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "ftl": { 00:10:18.532 "mask": "0x40", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "blobfs": { 00:10:18.532 "mask": "0x80", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "dsa": { 00:10:18.532 "mask": "0x200", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "thread": { 00:10:18.532 "mask": "0x400", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "nvme_pcie": { 00:10:18.532 "mask": "0x800", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "iaa": { 00:10:18.532 "mask": "0x1000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "nvme_tcp": { 00:10:18.532 "mask": "0x2000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "bdev_nvme": { 00:10:18.532 "mask": "0x4000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "sock": { 00:10:18.532 "mask": "0x8000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "blob": { 00:10:18.532 "mask": "0x10000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "bdev_raid": { 00:10:18.532 "mask": "0x20000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 }, 00:10:18.532 "scheduler": { 00:10:18.532 "mask": "0x40000", 00:10:18.532 "tpoint_mask": "0x0" 00:10:18.532 } 00:10:18.532 }' 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:18.532 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:18.791 ************************************ 00:10:18.791 END TEST rpc_trace_cmd_test 00:10:18.791 ************************************ 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:18.791 00:10:18.791 real 0m0.264s 
00:10:18.791 user 0m0.222s 00:10:18.791 sys 0m0.032s 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.791 13:44:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 13:44:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:18.791 13:44:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:18.791 13:44:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:18.791 13:44:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:18.791 13:44:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.791 13:44:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 ************************************ 00:10:18.791 START TEST rpc_daemon_integrity 00:10:18.791 ************************************ 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.791 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:19.059 { 00:10:19.059 "name": "Malloc2", 00:10:19.059 "aliases": [ 00:10:19.059 "00582a9f-d973-4626-ad61-910c78e08fd0" 00:10:19.059 ], 00:10:19.059 "product_name": "Malloc disk", 00:10:19.059 "block_size": 512, 00:10:19.059 "num_blocks": 16384, 00:10:19.059 "uuid": "00582a9f-d973-4626-ad61-910c78e08fd0", 00:10:19.059 "assigned_rate_limits": { 00:10:19.059 "rw_ios_per_sec": 0, 00:10:19.059 "rw_mbytes_per_sec": 0, 00:10:19.059 "r_mbytes_per_sec": 0, 00:10:19.059 "w_mbytes_per_sec": 0 00:10:19.059 }, 00:10:19.059 "claimed": false, 00:10:19.059 "zoned": false, 00:10:19.059 "supported_io_types": { 00:10:19.059 "read": true, 00:10:19.059 "write": true, 00:10:19.059 "unmap": true, 00:10:19.059 "flush": true, 00:10:19.059 "reset": true, 00:10:19.059 "nvme_admin": false, 00:10:19.059 "nvme_io": false, 00:10:19.059 "nvme_io_md": false, 00:10:19.059 "write_zeroes": true, 00:10:19.059 "zcopy": true, 00:10:19.059 "get_zone_info": false, 00:10:19.059 "zone_management": false, 00:10:19.059 "zone_append": false, 00:10:19.059 "compare": false, 00:10:19.059 
"compare_and_write": false, 00:10:19.059 "abort": true, 00:10:19.059 "seek_hole": false, 00:10:19.059 "seek_data": false, 00:10:19.059 "copy": true, 00:10:19.059 "nvme_iov_md": false 00:10:19.059 }, 00:10:19.059 "memory_domains": [ 00:10:19.059 { 00:10:19.059 "dma_device_id": "system", 00:10:19.059 "dma_device_type": 1 00:10:19.059 }, 00:10:19.059 { 00:10:19.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.059 "dma_device_type": 2 00:10:19.059 } 00:10:19.059 ], 00:10:19.059 "driver_specific": {} 00:10:19.059 } 00:10:19.059 ]' 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.059 [2024-11-04 13:44:05.767241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:19.059 [2024-11-04 13:44:05.767312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.059 [2024-11-04 13:44:05.767342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:19.059 [2024-11-04 13:44:05.767357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.059 [2024-11-04 13:44:05.770324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.059 [2024-11-04 13:44:05.770487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:19.059 Passthru0 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.059 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:19.059 { 00:10:19.059 "name": "Malloc2", 00:10:19.059 "aliases": [ 00:10:19.060 "00582a9f-d973-4626-ad61-910c78e08fd0" 00:10:19.060 ], 00:10:19.060 "product_name": "Malloc disk", 00:10:19.060 "block_size": 512, 00:10:19.060 "num_blocks": 16384, 00:10:19.060 "uuid": "00582a9f-d973-4626-ad61-910c78e08fd0", 00:10:19.060 "assigned_rate_limits": { 00:10:19.060 "rw_ios_per_sec": 0, 00:10:19.060 "rw_mbytes_per_sec": 0, 00:10:19.060 "r_mbytes_per_sec": 0, 00:10:19.060 "w_mbytes_per_sec": 0 00:10:19.060 }, 00:10:19.060 "claimed": true, 00:10:19.060 "claim_type": "exclusive_write", 00:10:19.060 "zoned": false, 00:10:19.060 "supported_io_types": { 00:10:19.060 "read": true, 00:10:19.060 "write": true, 00:10:19.060 "unmap": true, 00:10:19.060 "flush": true, 00:10:19.060 "reset": true, 00:10:19.060 "nvme_admin": false, 00:10:19.060 "nvme_io": false, 00:10:19.060 "nvme_io_md": false, 00:10:19.060 "write_zeroes": true, 00:10:19.060 "zcopy": true, 00:10:19.060 "get_zone_info": false, 00:10:19.060 "zone_management": false, 00:10:19.060 "zone_append": false, 00:10:19.060 "compare": false, 00:10:19.060 "compare_and_write": false, 00:10:19.060 "abort": true, 00:10:19.060 "seek_hole": false, 00:10:19.060 "seek_data": false, 
00:10:19.060 "copy": true, 00:10:19.060 "nvme_iov_md": false 00:10:19.060 }, 00:10:19.060 "memory_domains": [ 00:10:19.060 { 00:10:19.060 "dma_device_id": "system", 00:10:19.060 "dma_device_type": 1 00:10:19.060 }, 00:10:19.060 { 00:10:19.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.060 "dma_device_type": 2 00:10:19.060 } 00:10:19.060 ], 00:10:19.060 "driver_specific": {} 00:10:19.060 }, 00:10:19.060 { 00:10:19.060 "name": "Passthru0", 00:10:19.060 "aliases": [ 00:10:19.060 "0ff8cea1-09d6-5a83-aa80-e40fd151a212" 00:10:19.060 ], 00:10:19.060 "product_name": "passthru", 00:10:19.060 "block_size": 512, 00:10:19.060 "num_blocks": 16384, 00:10:19.060 "uuid": "0ff8cea1-09d6-5a83-aa80-e40fd151a212", 00:10:19.060 "assigned_rate_limits": { 00:10:19.060 "rw_ios_per_sec": 0, 00:10:19.060 "rw_mbytes_per_sec": 0, 00:10:19.060 "r_mbytes_per_sec": 0, 00:10:19.060 "w_mbytes_per_sec": 0 00:10:19.060 }, 00:10:19.060 "claimed": false, 00:10:19.060 "zoned": false, 00:10:19.060 "supported_io_types": { 00:10:19.060 "read": true, 00:10:19.060 "write": true, 00:10:19.060 "unmap": true, 00:10:19.060 "flush": true, 00:10:19.060 "reset": true, 00:10:19.060 "nvme_admin": false, 00:10:19.060 "nvme_io": false, 00:10:19.060 "nvme_io_md": false, 00:10:19.060 "write_zeroes": true, 00:10:19.060 "zcopy": true, 00:10:19.060 "get_zone_info": false, 00:10:19.060 "zone_management": false, 00:10:19.060 "zone_append": false, 00:10:19.060 "compare": false, 00:10:19.060 "compare_and_write": false, 00:10:19.060 "abort": true, 00:10:19.060 "seek_hole": false, 00:10:19.060 "seek_data": false, 00:10:19.060 "copy": true, 00:10:19.060 "nvme_iov_md": false 00:10:19.060 }, 00:10:19.060 "memory_domains": [ 00:10:19.060 { 00:10:19.060 "dma_device_id": "system", 00:10:19.060 "dma_device_type": 1 00:10:19.060 }, 00:10:19.060 { 00:10:19.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.060 "dma_device_type": 2 00:10:19.060 } 00:10:19.060 ], 00:10:19.060 "driver_specific": { 00:10:19.060 "passthru": { 00:10:19.060 "name": "Passthru0", 00:10:19.060 "base_bdev_name": "Malloc2" 00:10:19.060 } 00:10:19.060 } 00:10:19.060 } 00:10:19.060 ]' 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:19.060 ************************************ 00:10:19.060 END TEST rpc_daemon_integrity 00:10:19.060 ************************************ 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:19.060 00:10:19.060 real 0m0.333s 00:10:19.060 user 0m0.176s 00:10:19.060 sys 0m0.056s 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.060 13:44:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:19.332 13:44:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:19.332 13:44:06 rpc -- rpc/rpc.sh@84 -- # killprocess 58407 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@952 -- # '[' -z 58407 ']' 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@956 -- # kill -0 58407 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@957 -- # uname 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58407 00:10:19.332 killing process with pid 58407 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58407' 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@971 -- # kill 58407 00:10:19.332 13:44:06 rpc -- common/autotest_common.sh@976 -- # wait 58407 00:10:21.863 00:10:21.863 real 0m5.707s 00:10:21.863 user 0m6.338s 00:10:21.863 sys 0m0.966s 00:10:21.863 13:44:08 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.863 ************************************ 00:10:21.863 END TEST rpc 00:10:21.863 ************************************ 00:10:21.863 13:44:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.863 13:44:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:21.863 13:44:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:21.863 13:44:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.863 13:44:08 -- common/autotest_common.sh@10 -- # set +x 00:10:21.863 ************************************ 00:10:21.863 START TEST skip_rpc 00:10:21.863 ************************************ 00:10:21.863 13:44:08 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:22.122 * Looking for test storage... 
00:10:22.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.122 13:44:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.122 --rc genhtml_branch_coverage=1 00:10:22.122 --rc genhtml_function_coverage=1 00:10:22.122 --rc genhtml_legend=1 00:10:22.122 --rc geninfo_all_blocks=1 00:10:22.122 --rc geninfo_unexecuted_blocks=1 00:10:22.122 00:10:22.122 ' 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.122 --rc genhtml_branch_coverage=1 00:10:22.122 --rc genhtml_function_coverage=1 00:10:22.122 --rc genhtml_legend=1 00:10:22.122 --rc geninfo_all_blocks=1 00:10:22.122 --rc geninfo_unexecuted_blocks=1 00:10:22.122 00:10:22.122 ' 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.122 --rc genhtml_branch_coverage=1 00:10:22.122 --rc genhtml_function_coverage=1 00:10:22.122 --rc genhtml_legend=1 00:10:22.122 --rc geninfo_all_blocks=1 00:10:22.122 --rc geninfo_unexecuted_blocks=1 00:10:22.122 00:10:22.122 ' 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.122 --rc genhtml_branch_coverage=1 00:10:22.122 --rc genhtml_function_coverage=1 00:10:22.122 --rc genhtml_legend=1 00:10:22.122 --rc geninfo_all_blocks=1 00:10:22.122 --rc geninfo_unexecuted_blocks=1 00:10:22.122 00:10:22.122 ' 00:10:22.122 13:44:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:22.122 13:44:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:22.122 13:44:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.122 13:44:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.122 ************************************ 00:10:22.122 START TEST skip_rpc 00:10:22.122 ************************************ 00:10:22.122 13:44:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:10:22.122 13:44:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58647 00:10:22.122 13:44:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:22.122 13:44:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:22.122 13:44:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:22.381 [2024-11-04 13:44:09.097619] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
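[Annotation] Before the target startup lines above, the trace walks the version gate in scripts/common.sh: "lt 1.15 2" splits both version strings on '.', '-' and ':' into arrays and compares them element by element, so an lcov older than 2.x selects the extra branch/function coverage flags. The comparison reduces to roughly this (a condensed sketch of the element-wise compare, not the verbatim scripts/common.sh code; it assumes numeric version components):

    lt() {  # lt VER1 VER2 -> success (0) when VER1 < VER2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }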
00:10:22.381 [2024-11-04 13:44:09.098037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58647 ] 00:10:22.381 [2024-11-04 13:44:09.283840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.639 [2024-11-04 13:44:09.452883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58647 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58647 ']' 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58647 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:10:27.906 13:44:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.906 13:44:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58647 00:10:27.906 13:44:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.906 killing process with pid 58647 00:10:27.906 13:44:14 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.906 13:44:14 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58647' 00:10:27.906 13:44:14 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58647 00:10:27.906 13:44:14 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58647 00:10:30.435 ************************************ 00:10:30.435 END TEST skip_rpc 00:10:30.435 ************************************ 00:10:30.435 00:10:30.435 real 0m7.752s 00:10:30.435 user 0m7.222s 00:10:30.435 sys 0m0.431s 00:10:30.435 13:44:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:30.435 13:44:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:10:30.435 13:44:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:30.435 13:44:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:30.435 13:44:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:30.435 13:44:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.435 ************************************ 00:10:30.435 START TEST skip_rpc_with_json 00:10:30.435 ************************************ 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58755 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58755 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58755 ']' 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:30.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:30.435 13:44:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:30.435 [2024-11-04 13:44:16.940795] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
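[Annotation] skip_rpc_with_json, which starts here, checks that runtime RPC state survives a configuration round-trip: the test mutates the freshly started target (nvmf_create_transport), snapshots it with save_config, relaunches spdk_tgt from that snapshot with --json and --no-rpc-server, and finally greps the new target's log for the TCP transport init notice, all of which appears in the trace below. Stripped of the harness, the round-trip is roughly (a sketch; paths and flags as used in this run, commands issued from the SPDK repo root):

    scripts/rpc.py nvmf_create_transport -t tcp                   # mutate live state
    scripts/rpc.py save_config > test/rpc/config.json             # snapshot it
    # ... stop the first target, then replay the snapshot without an RPC server:
    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5; kill $!; wait $!
    grep -q 'TCP Transport Init' test/rpc/log.txt                 # transport was recreated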
00:10:30.435 [2024-11-04 13:44:16.940972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58755 ] 00:10:30.435 [2024-11-04 13:44:17.141169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.435 [2024-11-04 13:44:17.329242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:31.839 [2024-11-04 13:44:18.324426] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:31.839 request: 00:10:31.839 { 00:10:31.839 "trtype": "tcp", 00:10:31.839 "method": "nvmf_get_transports", 00:10:31.839 "req_id": 1 00:10:31.839 } 00:10:31.839 Got JSON-RPC error response 00:10:31.839 response: 00:10:31.839 { 00:10:31.839 "code": -19, 00:10:31.839 "message": "No such device" 00:10:31.839 } 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.839 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:31.839 [2024-11-04 13:44:18.336560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.840 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.840 13:44:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:31.840 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.840 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:31.840 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.840 13:44:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:31.840 { 00:10:31.840 "subsystems": [ 00:10:31.840 { 00:10:31.840 "subsystem": "fsdev", 00:10:31.840 "config": [ 00:10:31.840 { 00:10:31.840 "method": "fsdev_set_opts", 00:10:31.840 "params": { 00:10:31.840 "fsdev_io_pool_size": 65535, 00:10:31.840 "fsdev_io_cache_size": 256 00:10:31.840 } 00:10:31.840 } 00:10:31.840 ] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "keyring", 00:10:31.840 "config": [] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "iobuf", 00:10:31.840 "config": [ 00:10:31.840 { 00:10:31.840 "method": "iobuf_set_options", 00:10:31.840 "params": { 00:10:31.840 "small_pool_count": 8192, 00:10:31.840 "large_pool_count": 1024, 00:10:31.840 "small_bufsize": 8192, 00:10:31.840 "large_bufsize": 135168, 00:10:31.840 "enable_numa": false 00:10:31.840 } 00:10:31.840 } 00:10:31.840 ] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "sock", 00:10:31.840 "config": [ 00:10:31.840 { 
00:10:31.840 "method": "sock_set_default_impl", 00:10:31.840 "params": { 00:10:31.840 "impl_name": "posix" 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "sock_impl_set_options", 00:10:31.840 "params": { 00:10:31.840 "impl_name": "ssl", 00:10:31.840 "recv_buf_size": 4096, 00:10:31.840 "send_buf_size": 4096, 00:10:31.840 "enable_recv_pipe": true, 00:10:31.840 "enable_quickack": false, 00:10:31.840 "enable_placement_id": 0, 00:10:31.840 "enable_zerocopy_send_server": true, 00:10:31.840 "enable_zerocopy_send_client": false, 00:10:31.840 "zerocopy_threshold": 0, 00:10:31.840 "tls_version": 0, 00:10:31.840 "enable_ktls": false 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "sock_impl_set_options", 00:10:31.840 "params": { 00:10:31.840 "impl_name": "posix", 00:10:31.840 "recv_buf_size": 2097152, 00:10:31.840 "send_buf_size": 2097152, 00:10:31.840 "enable_recv_pipe": true, 00:10:31.840 "enable_quickack": false, 00:10:31.840 "enable_placement_id": 0, 00:10:31.840 "enable_zerocopy_send_server": true, 00:10:31.840 "enable_zerocopy_send_client": false, 00:10:31.840 "zerocopy_threshold": 0, 00:10:31.840 "tls_version": 0, 00:10:31.840 "enable_ktls": false 00:10:31.840 } 00:10:31.840 } 00:10:31.840 ] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "vmd", 00:10:31.840 "config": [] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "accel", 00:10:31.840 "config": [ 00:10:31.840 { 00:10:31.840 "method": "accel_set_options", 00:10:31.840 "params": { 00:10:31.840 "small_cache_size": 128, 00:10:31.840 "large_cache_size": 16, 00:10:31.840 "task_count": 2048, 00:10:31.840 "sequence_count": 2048, 00:10:31.840 "buf_count": 2048 00:10:31.840 } 00:10:31.840 } 00:10:31.840 ] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "bdev", 00:10:31.840 "config": [ 00:10:31.840 { 00:10:31.840 "method": "bdev_set_options", 00:10:31.840 "params": { 00:10:31.840 "bdev_io_pool_size": 65535, 00:10:31.840 "bdev_io_cache_size": 256, 00:10:31.840 "bdev_auto_examine": true, 00:10:31.840 "iobuf_small_cache_size": 128, 00:10:31.840 "iobuf_large_cache_size": 16 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "bdev_raid_set_options", 00:10:31.840 "params": { 00:10:31.840 "process_window_size_kb": 1024, 00:10:31.840 "process_max_bandwidth_mb_sec": 0 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "bdev_iscsi_set_options", 00:10:31.840 "params": { 00:10:31.840 "timeout_sec": 30 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "bdev_nvme_set_options", 00:10:31.840 "params": { 00:10:31.840 "action_on_timeout": "none", 00:10:31.840 "timeout_us": 0, 00:10:31.840 "timeout_admin_us": 0, 00:10:31.840 "keep_alive_timeout_ms": 10000, 00:10:31.840 "arbitration_burst": 0, 00:10:31.840 "low_priority_weight": 0, 00:10:31.840 "medium_priority_weight": 0, 00:10:31.840 "high_priority_weight": 0, 00:10:31.840 "nvme_adminq_poll_period_us": 10000, 00:10:31.840 "nvme_ioq_poll_period_us": 0, 00:10:31.840 "io_queue_requests": 0, 00:10:31.840 "delay_cmd_submit": true, 00:10:31.840 "transport_retry_count": 4, 00:10:31.840 "bdev_retry_count": 3, 00:10:31.840 "transport_ack_timeout": 0, 00:10:31.840 "ctrlr_loss_timeout_sec": 0, 00:10:31.840 "reconnect_delay_sec": 0, 00:10:31.840 "fast_io_fail_timeout_sec": 0, 00:10:31.840 "disable_auto_failback": false, 00:10:31.840 "generate_uuids": false, 00:10:31.840 "transport_tos": 0, 00:10:31.840 "nvme_error_stat": false, 00:10:31.840 "rdma_srq_size": 0, 00:10:31.840 "io_path_stat": false, 
00:10:31.840 "allow_accel_sequence": false, 00:10:31.840 "rdma_max_cq_size": 0, 00:10:31.840 "rdma_cm_event_timeout_ms": 0, 00:10:31.840 "dhchap_digests": [ 00:10:31.840 "sha256", 00:10:31.840 "sha384", 00:10:31.840 "sha512" 00:10:31.840 ], 00:10:31.840 "dhchap_dhgroups": [ 00:10:31.840 "null", 00:10:31.840 "ffdhe2048", 00:10:31.840 "ffdhe3072", 00:10:31.840 "ffdhe4096", 00:10:31.840 "ffdhe6144", 00:10:31.840 "ffdhe8192" 00:10:31.840 ] 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "bdev_nvme_set_hotplug", 00:10:31.840 "params": { 00:10:31.840 "period_us": 100000, 00:10:31.840 "enable": false 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "bdev_wait_for_examine" 00:10:31.840 } 00:10:31.840 ] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "scsi", 00:10:31.840 "config": null 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "scheduler", 00:10:31.840 "config": [ 00:10:31.840 { 00:10:31.840 "method": "framework_set_scheduler", 00:10:31.840 "params": { 00:10:31.840 "name": "static" 00:10:31.840 } 00:10:31.840 } 00:10:31.840 ] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "vhost_scsi", 00:10:31.840 "config": [] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "vhost_blk", 00:10:31.840 "config": [] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "ublk", 00:10:31.840 "config": [] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "nbd", 00:10:31.840 "config": [] 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "subsystem": "nvmf", 00:10:31.840 "config": [ 00:10:31.840 { 00:10:31.840 "method": "nvmf_set_config", 00:10:31.840 "params": { 00:10:31.840 "discovery_filter": "match_any", 00:10:31.840 "admin_cmd_passthru": { 00:10:31.840 "identify_ctrlr": false 00:10:31.840 }, 00:10:31.840 "dhchap_digests": [ 00:10:31.840 "sha256", 00:10:31.840 "sha384", 00:10:31.840 "sha512" 00:10:31.840 ], 00:10:31.840 "dhchap_dhgroups": [ 00:10:31.840 "null", 00:10:31.840 "ffdhe2048", 00:10:31.840 "ffdhe3072", 00:10:31.840 "ffdhe4096", 00:10:31.840 "ffdhe6144", 00:10:31.840 "ffdhe8192" 00:10:31.840 ] 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "nvmf_set_max_subsystems", 00:10:31.840 "params": { 00:10:31.840 "max_subsystems": 1024 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "nvmf_set_crdt", 00:10:31.840 "params": { 00:10:31.840 "crdt1": 0, 00:10:31.840 "crdt2": 0, 00:10:31.840 "crdt3": 0 00:10:31.840 } 00:10:31.840 }, 00:10:31.840 { 00:10:31.840 "method": "nvmf_create_transport", 00:10:31.840 "params": { 00:10:31.840 "trtype": "TCP", 00:10:31.840 "max_queue_depth": 128, 00:10:31.840 "max_io_qpairs_per_ctrlr": 127, 00:10:31.840 "in_capsule_data_size": 4096, 00:10:31.840 "max_io_size": 131072, 00:10:31.840 "io_unit_size": 131072, 00:10:31.840 "max_aq_depth": 128, 00:10:31.840 "num_shared_buffers": 511, 00:10:31.840 "buf_cache_size": 4294967295, 00:10:31.841 "dif_insert_or_strip": false, 00:10:31.841 "zcopy": false, 00:10:31.841 "c2h_success": true, 00:10:31.841 "sock_priority": 0, 00:10:31.841 "abort_timeout_sec": 1, 00:10:31.841 "ack_timeout": 0, 00:10:31.841 "data_wr_pool_size": 0 00:10:31.841 } 00:10:31.841 } 00:10:31.841 ] 00:10:31.841 }, 00:10:31.841 { 00:10:31.841 "subsystem": "iscsi", 00:10:31.841 "config": [ 00:10:31.841 { 00:10:31.841 "method": "iscsi_set_options", 00:10:31.841 "params": { 00:10:31.841 "node_base": "iqn.2016-06.io.spdk", 00:10:31.841 "max_sessions": 128, 00:10:31.841 "max_connections_per_session": 2, 00:10:31.841 "max_queue_depth": 64, 00:10:31.841 
"default_time2wait": 2, 00:10:31.841 "default_time2retain": 20, 00:10:31.841 "first_burst_length": 8192, 00:10:31.841 "immediate_data": true, 00:10:31.841 "allow_duplicated_isid": false, 00:10:31.841 "error_recovery_level": 0, 00:10:31.841 "nop_timeout": 60, 00:10:31.841 "nop_in_interval": 30, 00:10:31.841 "disable_chap": false, 00:10:31.841 "require_chap": false, 00:10:31.841 "mutual_chap": false, 00:10:31.841 "chap_group": 0, 00:10:31.841 "max_large_datain_per_connection": 64, 00:10:31.841 "max_r2t_per_connection": 4, 00:10:31.841 "pdu_pool_size": 36864, 00:10:31.841 "immediate_data_pool_size": 16384, 00:10:31.841 "data_out_pool_size": 2048 00:10:31.841 } 00:10:31.841 } 00:10:31.841 ] 00:10:31.841 } 00:10:31.841 ] 00:10:31.841 } 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58755 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58755 ']' 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58755 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58755 00:10:31.841 killing process with pid 58755 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58755' 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58755 00:10:31.841 13:44:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58755 00:10:35.123 13:44:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58818 00:10:35.123 13:44:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:35.123 13:44:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58818 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58818 ']' 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58818 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58818 00:10:40.388 killing process with pid 58818 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58818' 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58818 00:10:40.388 13:44:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58818 00:10:42.288 13:44:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:42.288 13:44:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:42.288 00:10:42.288 real 0m12.207s 00:10:42.288 user 0m11.658s 00:10:42.288 sys 0m1.028s 00:10:42.288 13:44:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.288 13:44:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:42.288 ************************************ 00:10:42.288 END TEST skip_rpc_with_json 00:10:42.288 ************************************ 00:10:42.288 13:44:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:42.288 13:44:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.288 13:44:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.288 13:44:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.288 ************************************ 00:10:42.288 START TEST skip_rpc_with_delay 00:10:42.288 ************************************ 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:42.288 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:42.545 [2024-11-04 13:44:29.219810] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
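[Annotation] skip_rpc_with_delay, starting above, is a pure negative test: --wait-for-rpc asks the app to pause initialization until a start RPC arrives, which is contradictory when --no-rpc-server disables the RPC server, so spdk_tgt must refuse the combination and exit non-zero (the NOT wrapper above asserts exactly that). Reproduced by hand, the check is just (a sketch, using this run's binary path):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?    # expected: non-zero, after the "Cannot use '--wait-for-rpc'" error above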
00:10:42.545 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:10:42.545 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.545 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.545 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.545 00:10:42.545 real 0m0.228s 00:10:42.545 user 0m0.111s 00:10:42.545 sys 0m0.114s 00:10:42.545 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.545 ************************************ 00:10:42.545 END TEST skip_rpc_with_delay 00:10:42.545 13:44:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:42.545 ************************************ 00:10:42.545 13:44:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:42.545 13:44:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:42.545 13:44:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:42.545 13:44:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:42.545 13:44:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.545 13:44:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.545 ************************************ 00:10:42.545 START TEST exit_on_failed_rpc_init 00:10:42.545 ************************************ 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:10:42.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58959 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58959 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58959 ']' 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.545 13:44:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:42.802 [2024-11-04 13:44:29.500091] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
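[Annotation] waitforlisten, used here before the test proper, simply polls until the new pid answers on its RPC socket; once it does, exit_on_failed_rpc_init launches a second spdk_tgt against the same /var/tmp/spdk.sock and expects RPC listen to fail (the "in use. Specify another." error below). A minimal polling sketch (illustrative; the real autotest_common.sh helper has more retry and error handling, and this assumes it runs from the SPDK repo root):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        local i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1                       # target died early
            scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                                         # timed out
    }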
00:10:42.802 [2024-11-04 13:44:29.500296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58959 ] 00:10:42.802 [2024-11-04 13:44:29.705513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.060 [2024-11-04 13:44:29.915239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.992 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:43.992 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:10:43.992 13:44:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:43.993 13:44:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:44.250 [2024-11-04 13:44:30.990983] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:10:44.250 [2024-11-04 13:44:30.991175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:10:44.508 [2024-11-04 13:44:31.195273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.508 [2024-11-04 13:44:31.370255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.508 [2024-11-04 13:44:31.370410] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:44.508 [2024-11-04 13:44:31.370440] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:44.508 [2024-11-04 13:44:31.370478] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58959 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58959 ']' 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58959 00:10:44.766 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58959 00:10:45.025 killing process with pid 58959 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58959' 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58959 00:10:45.025 13:44:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58959 00:10:47.555 00:10:47.555 real 0m4.953s 00:10:47.555 user 0m5.424s 00:10:47.555 sys 0m0.723s 00:10:47.555 13:44:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.555 ************************************ 00:10:47.555 13:44:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:47.555 END TEST exit_on_failed_rpc_init 00:10:47.555 ************************************ 00:10:47.555 13:44:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:47.555 ************************************ 00:10:47.555 END TEST skip_rpc 00:10:47.555 ************************************ 00:10:47.555 00:10:47.555 real 0m25.582s 00:10:47.555 user 0m24.616s 00:10:47.555 sys 0m2.546s 00:10:47.555 13:44:34 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.555 13:44:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.555 13:44:34 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:47.555 13:44:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:47.555 13:44:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:47.555 13:44:34 -- common/autotest_common.sh@10 -- # set +x 00:10:47.555 
************************************ 00:10:47.555 START TEST rpc_client 00:10:47.555 ************************************ 00:10:47.555 13:44:34 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:47.813 * Looking for test storage... 00:10:47.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.813 13:44:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:47.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.813 --rc genhtml_branch_coverage=1 00:10:47.813 --rc genhtml_function_coverage=1 00:10:47.813 --rc genhtml_legend=1 00:10:47.813 --rc geninfo_all_blocks=1 00:10:47.813 --rc geninfo_unexecuted_blocks=1 00:10:47.813 00:10:47.813 ' 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:47.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.813 --rc genhtml_branch_coverage=1 00:10:47.813 --rc genhtml_function_coverage=1 00:10:47.813 --rc genhtml_legend=1 00:10:47.813 --rc geninfo_all_blocks=1 00:10:47.813 --rc geninfo_unexecuted_blocks=1 00:10:47.813 00:10:47.813 ' 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:47.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.813 --rc genhtml_branch_coverage=1 00:10:47.813 --rc genhtml_function_coverage=1 00:10:47.813 --rc genhtml_legend=1 00:10:47.813 --rc geninfo_all_blocks=1 00:10:47.813 --rc geninfo_unexecuted_blocks=1 00:10:47.813 00:10:47.813 ' 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:47.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.813 --rc genhtml_branch_coverage=1 00:10:47.813 --rc genhtml_function_coverage=1 00:10:47.813 --rc genhtml_legend=1 00:10:47.813 --rc geninfo_all_blocks=1 00:10:47.813 --rc geninfo_unexecuted_blocks=1 00:10:47.813 00:10:47.813 ' 00:10:47.813 13:44:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:47.813 OK 00:10:47.813 13:44:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:47.813 00:10:47.813 real 0m0.285s 00:10:47.813 user 0m0.148s 00:10:47.813 sys 0m0.142s 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.813 13:44:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:47.813 ************************************ 00:10:47.813 END TEST rpc_client 00:10:47.813 ************************************ 00:10:48.072 13:44:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:48.072 13:44:34 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:48.072 13:44:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.072 13:44:34 -- common/autotest_common.sh@10 -- # set +x 00:10:48.072 ************************************ 00:10:48.072 START TEST json_config 00:10:48.072 ************************************ 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.072 13:44:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.072 13:44:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.072 13:44:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.072 13:44:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.072 13:44:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.072 13:44:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:48.072 13:44:34 json_config -- scripts/common.sh@345 -- # : 1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.072 13:44:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.072 13:44:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@353 -- # local d=1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.072 13:44:34 json_config -- scripts/common.sh@355 -- # echo 1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.072 13:44:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@353 -- # local d=2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.072 13:44:34 json_config -- scripts/common.sh@355 -- # echo 2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.072 13:44:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.072 13:44:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.072 13:44:34 json_config -- scripts/common.sh@368 -- # return 0 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:48.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.072 --rc genhtml_branch_coverage=1 00:10:48.072 --rc genhtml_function_coverage=1 00:10:48.072 --rc genhtml_legend=1 00:10:48.072 --rc geninfo_all_blocks=1 00:10:48.072 --rc geninfo_unexecuted_blocks=1 00:10:48.072 00:10:48.072 ' 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:48.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.072 --rc genhtml_branch_coverage=1 00:10:48.072 --rc genhtml_function_coverage=1 00:10:48.072 --rc genhtml_legend=1 00:10:48.072 --rc geninfo_all_blocks=1 00:10:48.072 --rc geninfo_unexecuted_blocks=1 00:10:48.072 00:10:48.072 ' 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:48.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.072 --rc genhtml_branch_coverage=1 00:10:48.072 --rc genhtml_function_coverage=1 00:10:48.072 --rc genhtml_legend=1 00:10:48.072 --rc geninfo_all_blocks=1 00:10:48.072 --rc geninfo_unexecuted_blocks=1 00:10:48.072 00:10:48.072 ' 00:10:48.072 13:44:34 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:48.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.072 --rc genhtml_branch_coverage=1 00:10:48.072 --rc genhtml_function_coverage=1 00:10:48.072 --rc genhtml_legend=1 00:10:48.072 --rc geninfo_all_blocks=1 00:10:48.072 --rc geninfo_unexecuted_blocks=1 00:10:48.072 00:10:48.072 ' 00:10:48.072 13:44:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.072 13:44:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bfb4f1d-1d86-4c5d-ad5c-cb927cc7889e 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7bfb4f1d-1d86-4c5d-ad5c-cb927cc7889e 00:10:48.072 13:44:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.073 13:44:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.073 13:44:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.073 13:44:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.073 13:44:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.073 13:44:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.073 13:44:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.073 13:44:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.073 13:44:34 json_config -- paths/export.sh@5 -- # export PATH 00:10:48.073 13:44:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@51 -- # : 0 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.073 13:44:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.073 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.073 13:44:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:48.073 WARNING: No tests are enabled so not running JSON configuration tests 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:48.073 13:44:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:48.073 00:10:48.073 real 0m0.190s 00:10:48.073 user 0m0.101s 00:10:48.073 sys 0m0.091s 00:10:48.073 13:44:34 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:48.073 13:44:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.073 ************************************ 00:10:48.073 END TEST json_config 00:10:48.073 ************************************ 00:10:48.332 13:44:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:48.332 13:44:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:48.332 13:44:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.332 13:44:35 -- common/autotest_common.sh@10 -- # set +x 00:10:48.332 ************************************ 00:10:48.332 START TEST json_config_extra_key 00:10:48.332 ************************************ 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.332 13:44:35 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.332 13:44:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:48.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.332 --rc genhtml_branch_coverage=1 00:10:48.332 --rc genhtml_function_coverage=1 00:10:48.332 --rc genhtml_legend=1 00:10:48.332 --rc geninfo_all_blocks=1 00:10:48.332 --rc geninfo_unexecuted_blocks=1 00:10:48.332 00:10:48.332 ' 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:48.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.332 --rc genhtml_branch_coverage=1 00:10:48.332 --rc genhtml_function_coverage=1 00:10:48.332 --rc genhtml_legend=1 00:10:48.332 --rc geninfo_all_blocks=1 00:10:48.332 --rc geninfo_unexecuted_blocks=1 00:10:48.332 00:10:48.332 ' 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:48.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.332 --rc genhtml_branch_coverage=1 00:10:48.332 --rc genhtml_function_coverage=1 00:10:48.332 --rc genhtml_legend=1 00:10:48.332 --rc geninfo_all_blocks=1 00:10:48.332 --rc geninfo_unexecuted_blocks=1 00:10:48.332 00:10:48.332 ' 00:10:48.332 13:44:35 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:48.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.332 --rc genhtml_branch_coverage=1 00:10:48.332 --rc 
genhtml_function_coverage=1 00:10:48.332 --rc genhtml_legend=1 00:10:48.332 --rc geninfo_all_blocks=1 00:10:48.332 --rc geninfo_unexecuted_blocks=1 00:10:48.332 00:10:48.332 ' 00:10:48.332 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:48.332 13:44:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:48.332 13:44:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bfb4f1d-1d86-4c5d-ad5c-cb927cc7889e 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7bfb4f1d-1d86-4c5d-ad5c-cb927cc7889e 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.333 13:44:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.333 13:44:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.333 13:44:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.333 13:44:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.333 13:44:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.333 13:44:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.333 13:44:35 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.333 13:44:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:48.333 13:44:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.333 13:44:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:48.333 INFO: launching applications... 
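[note] The "[: : integer expression expected" message from nvmf/common.sh line 33 above is a real shell error: an empty string reaching a numeric test. A minimal sketch of the failing pattern and one possible guard (the variable name here is illustrative, not the actual common.sh code):
    val=""                     # hypothetical stand-in for the unset config variable
    [ "$val" -eq 1 ]           # bash: [: : integer expression expected (exit status 2)
    [ "${val:-0}" -eq 1 ]      # guarded form: empty defaults to 0 and the test stays quiet
The test harness tolerates the error because the failing test simply evaluates false, but the stderr line recurs every time common.sh is sourced, as seen throughout this log.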
00:10:48.333 13:44:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59187 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:48.333 Waiting for target to run... 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59187 /var/tmp/spdk_tgt.sock 00:10:48.333 13:44:35 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 59187 ']' 00:10:48.333 13:44:35 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:48.333 13:44:35 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.333 13:44:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:48.333 13:44:35 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:48.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:48.333 13:44:35 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.333 13:44:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:48.590 [2024-11-04 13:44:35.381410] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:10:48.590 [2024-11-04 13:44:35.381945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59187 ] 00:10:49.155 [2024-11-04 13:44:35.816001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.155 [2024-11-04 13:44:35.981131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.088 00:10:50.088 INFO: shutting down applications... 00:10:50.088 13:44:36 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:50.088 13:44:36 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:50.088 13:44:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
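[note] waitforlisten above blocks until the freshly launched spdk_tgt answers on its RPC socket. A rough equivalent of that polling pattern, with the socket path and retry count taken from the trace; the loop body is an assumption, the real helper lives in autotest_common.sh:
    sock=/var/tmp/spdk_tgt.sock
    max_retries=100                      # matches 'local max_retries=100' in the trace
    for (( i = 0; i < max_retries; i++ )); do
        if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            break                        # target is up and answering RPCs
        fi
        sleep 0.5
    done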
00:10:50.088 13:44:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59187 ]] 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59187 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:50.088 13:44:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:50.653 13:44:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:50.653 13:44:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:50.653 13:44:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:50.653 13:44:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:51.218 13:44:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:51.218 13:44:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:51.218 13:44:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:51.218 13:44:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:51.783 13:44:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:51.783 13:44:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:51.783 13:44:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:51.783 13:44:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:52.040 13:44:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:52.040 13:44:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:52.040 13:44:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:52.040 13:44:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:52.605 13:44:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:52.605 13:44:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:52.605 13:44:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:52.605 13:44:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:53.171 13:44:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:53.171 13:44:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:53.171 13:44:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:53.171 13:44:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:53.736 13:44:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:53.736 13:44:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:53.736 13:44:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59187 00:10:53.736 13:44:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:53.736 13:44:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:53.736 SPDK target shutdown done 00:10:53.736 Success 00:10:53.736 13:44:40 json_config_extra_key -- 
json_config/common.sh@48 -- # [[ -n '' ]] 00:10:53.736 13:44:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:53.736 13:44:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:53.736 ************************************ 00:10:53.736 END TEST json_config_extra_key 00:10:53.736 ************************************ 00:10:53.736 00:10:53.736 real 0m5.438s 00:10:53.736 user 0m5.020s 00:10:53.736 sys 0m0.682s 00:10:53.736 13:44:40 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:53.736 13:44:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:53.736 13:44:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:53.736 13:44:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:53.736 13:44:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.736 13:44:40 -- common/autotest_common.sh@10 -- # set +x 00:10:53.736 ************************************ 00:10:53.736 START TEST alias_rpc 00:10:53.736 ************************************ 00:10:53.736 13:44:40 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:53.736 * Looking for test storage... 00:10:53.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:53.736 13:44:40 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.736 13:44:40 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:53.736 13:44:40 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.995 13:44:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.995 --rc genhtml_branch_coverage=1 00:10:53.995 --rc genhtml_function_coverage=1 00:10:53.995 --rc genhtml_legend=1 00:10:53.995 --rc geninfo_all_blocks=1 00:10:53.995 --rc geninfo_unexecuted_blocks=1 00:10:53.995 00:10:53.995 ' 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.995 --rc genhtml_branch_coverage=1 00:10:53.995 --rc genhtml_function_coverage=1 00:10:53.995 --rc genhtml_legend=1 00:10:53.995 --rc geninfo_all_blocks=1 00:10:53.995 --rc geninfo_unexecuted_blocks=1 00:10:53.995 00:10:53.995 ' 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.995 --rc genhtml_branch_coverage=1 00:10:53.995 --rc genhtml_function_coverage=1 00:10:53.995 --rc genhtml_legend=1 00:10:53.995 --rc geninfo_all_blocks=1 00:10:53.995 --rc geninfo_unexecuted_blocks=1 00:10:53.995 00:10:53.995 ' 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.995 --rc genhtml_branch_coverage=1 00:10:53.995 --rc genhtml_function_coverage=1 00:10:53.995 --rc genhtml_legend=1 00:10:53.995 --rc geninfo_all_blocks=1 00:10:53.995 --rc geninfo_unexecuted_blocks=1 00:10:53.995 00:10:53.995 ' 00:10:53.995 13:44:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:53.995 13:44:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59316 00:10:53.995 13:44:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:53.995 13:44:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59316 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 59316 ']' 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:53.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:53.995 13:44:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 [2024-11-04 13:44:40.891697] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:10:53.995 [2024-11-04 13:44:40.892189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:10:54.255 [2024-11-04 13:44:41.101946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.513 [2024-11-04 13:44:41.287840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.446 13:44:42 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:55.446 13:44:42 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:55.446 13:44:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:56.011 13:44:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59316 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 59316 ']' 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 59316 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59316 00:10:56.011 killing process with pid 59316 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59316' 00:10:56.011 13:44:42 alias_rpc -- common/autotest_common.sh@971 -- # kill 59316 00:10:56.012 13:44:42 alias_rpc -- common/autotest_common.sh@976 -- # wait 59316 00:10:59.293 ************************************ 00:10:59.293 END TEST alias_rpc 00:10:59.293 ************************************ 00:10:59.293 00:10:59.293 real 0m5.028s 00:10:59.293 user 0m5.162s 00:10:59.293 sys 0m0.693s 00:10:59.293 13:44:45 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.293 13:44:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.293 13:44:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:59.293 13:44:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:59.293 13:44:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:59.293 13:44:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.293 13:44:45 -- common/autotest_common.sh@10 -- # set +x 00:10:59.293 ************************************ 00:10:59.293 START TEST spdkcli_tcp 00:10:59.293 ************************************ 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:59.293 * Looking for test storage... 
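[note] The lt/cmp_versions trace above (repeated for each test binary) decides whether the installed lcov (1.15) predates 2 by comparing version fields one at a time. A condensed sketch of the traced scripts/common.sh logic, simplified to the two-version case:
    ver1=1.15 ver2=2
    IFS=.-: read -ra v1 <<< "$ver1"      # split on the same separators as the trace
    IFS=.-: read -ra v2 <<< "$ver2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}      # missing fields compare as 0
        (( a > b )) && { echo "$ver1 > $ver2"; break; }
        (( a < b )) && { echo "$ver1 < $ver2"; break; }
    done
Here 1 < 2 already decides the first field, so lt returns true and the LCOV_OPTS coverage flags get exported, which is why the same heredoc block appears after every comparison in this log.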
00:10:59.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.293 13:44:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:59.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.293 --rc genhtml_branch_coverage=1 00:10:59.293 --rc genhtml_function_coverage=1 00:10:59.293 --rc genhtml_legend=1 00:10:59.293 --rc geninfo_all_blocks=1 00:10:59.293 --rc geninfo_unexecuted_blocks=1 00:10:59.293 00:10:59.293 ' 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:59.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.293 --rc genhtml_branch_coverage=1 00:10:59.293 --rc genhtml_function_coverage=1 00:10:59.293 --rc genhtml_legend=1 00:10:59.293 --rc geninfo_all_blocks=1 00:10:59.293 --rc geninfo_unexecuted_blocks=1 00:10:59.293 
00:10:59.293 ' 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:59.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.293 --rc genhtml_branch_coverage=1 00:10:59.293 --rc genhtml_function_coverage=1 00:10:59.293 --rc genhtml_legend=1 00:10:59.293 --rc geninfo_all_blocks=1 00:10:59.293 --rc geninfo_unexecuted_blocks=1 00:10:59.293 00:10:59.293 ' 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:59.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.293 --rc genhtml_branch_coverage=1 00:10:59.293 --rc genhtml_function_coverage=1 00:10:59.293 --rc genhtml_legend=1 00:10:59.293 --rc geninfo_all_blocks=1 00:10:59.293 --rc geninfo_unexecuted_blocks=1 00:10:59.293 00:10:59.293 ' 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59432 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59432 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 59432 ']' 00:10:59.293 13:44:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.293 13:44:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.293 [2024-11-04 13:44:45.906209] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:10:59.293 [2024-11-04 13:44:45.906546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:10:59.293 [2024-11-04 13:44:46.085235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:59.293 [2024-11-04 13:44:46.208891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.293 [2024-11-04 13:44:46.208913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.668 13:44:47 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.668 13:44:47 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:11:00.668 13:44:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59454 00:11:00.668 13:44:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:00.668 13:44:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:00.668 [ 00:11:00.668 "bdev_malloc_delete", 00:11:00.668 "bdev_malloc_create", 00:11:00.668 "bdev_null_resize", 00:11:00.668 "bdev_null_delete", 00:11:00.668 "bdev_null_create", 00:11:00.668 "bdev_nvme_cuse_unregister", 00:11:00.668 "bdev_nvme_cuse_register", 00:11:00.668 "bdev_opal_new_user", 00:11:00.668 "bdev_opal_set_lock_state", 00:11:00.668 "bdev_opal_delete", 00:11:00.668 "bdev_opal_get_info", 00:11:00.668 "bdev_opal_create", 00:11:00.668 "bdev_nvme_opal_revert", 00:11:00.668 "bdev_nvme_opal_init", 00:11:00.668 "bdev_nvme_send_cmd", 00:11:00.668 "bdev_nvme_set_keys", 00:11:00.668 "bdev_nvme_get_path_iostat", 00:11:00.668 "bdev_nvme_get_mdns_discovery_info", 00:11:00.668 "bdev_nvme_stop_mdns_discovery", 00:11:00.668 "bdev_nvme_start_mdns_discovery", 00:11:00.668 "bdev_nvme_set_multipath_policy", 00:11:00.668 "bdev_nvme_set_preferred_path", 00:11:00.668 "bdev_nvme_get_io_paths", 00:11:00.668 "bdev_nvme_remove_error_injection", 00:11:00.668 "bdev_nvme_add_error_injection", 00:11:00.668 "bdev_nvme_get_discovery_info", 00:11:00.668 "bdev_nvme_stop_discovery", 00:11:00.668 "bdev_nvme_start_discovery", 00:11:00.668 "bdev_nvme_get_controller_health_info", 00:11:00.668 "bdev_nvme_disable_controller", 00:11:00.668 "bdev_nvme_enable_controller", 00:11:00.668 "bdev_nvme_reset_controller", 00:11:00.668 "bdev_nvme_get_transport_statistics", 00:11:00.668 "bdev_nvme_apply_firmware", 00:11:00.668 "bdev_nvme_detach_controller", 00:11:00.668 "bdev_nvme_get_controllers", 00:11:00.668 "bdev_nvme_attach_controller", 00:11:00.668 "bdev_nvme_set_hotplug", 00:11:00.668 "bdev_nvme_set_options", 00:11:00.668 "bdev_passthru_delete", 00:11:00.668 "bdev_passthru_create", 00:11:00.668 "bdev_lvol_set_parent_bdev", 00:11:00.668 "bdev_lvol_set_parent", 00:11:00.668 "bdev_lvol_check_shallow_copy", 00:11:00.668 "bdev_lvol_start_shallow_copy", 00:11:00.668 "bdev_lvol_grow_lvstore", 00:11:00.668 "bdev_lvol_get_lvols", 00:11:00.668 "bdev_lvol_get_lvstores", 00:11:00.668 "bdev_lvol_delete", 00:11:00.668 "bdev_lvol_set_read_only", 00:11:00.668 "bdev_lvol_resize", 00:11:00.668 "bdev_lvol_decouple_parent", 00:11:00.668 "bdev_lvol_inflate", 00:11:00.668 "bdev_lvol_rename", 00:11:00.668 "bdev_lvol_clone_bdev", 00:11:00.668 "bdev_lvol_clone", 00:11:00.668 "bdev_lvol_snapshot", 00:11:00.668 "bdev_lvol_create", 00:11:00.668 "bdev_lvol_delete_lvstore", 00:11:00.668 "bdev_lvol_rename_lvstore", 00:11:00.668 
"bdev_lvol_create_lvstore", 00:11:00.668 "bdev_raid_set_options", 00:11:00.668 "bdev_raid_remove_base_bdev", 00:11:00.668 "bdev_raid_add_base_bdev", 00:11:00.668 "bdev_raid_delete", 00:11:00.668 "bdev_raid_create", 00:11:00.668 "bdev_raid_get_bdevs", 00:11:00.668 "bdev_error_inject_error", 00:11:00.668 "bdev_error_delete", 00:11:00.668 "bdev_error_create", 00:11:00.668 "bdev_split_delete", 00:11:00.668 "bdev_split_create", 00:11:00.668 "bdev_delay_delete", 00:11:00.668 "bdev_delay_create", 00:11:00.668 "bdev_delay_update_latency", 00:11:00.668 "bdev_zone_block_delete", 00:11:00.668 "bdev_zone_block_create", 00:11:00.668 "blobfs_create", 00:11:00.668 "blobfs_detect", 00:11:00.668 "blobfs_set_cache_size", 00:11:00.668 "bdev_xnvme_delete", 00:11:00.668 "bdev_xnvme_create", 00:11:00.668 "bdev_aio_delete", 00:11:00.668 "bdev_aio_rescan", 00:11:00.668 "bdev_aio_create", 00:11:00.668 "bdev_ftl_set_property", 00:11:00.668 "bdev_ftl_get_properties", 00:11:00.668 "bdev_ftl_get_stats", 00:11:00.668 "bdev_ftl_unmap", 00:11:00.668 "bdev_ftl_unload", 00:11:00.668 "bdev_ftl_delete", 00:11:00.668 "bdev_ftl_load", 00:11:00.668 "bdev_ftl_create", 00:11:00.668 "bdev_virtio_attach_controller", 00:11:00.668 "bdev_virtio_scsi_get_devices", 00:11:00.668 "bdev_virtio_detach_controller", 00:11:00.668 "bdev_virtio_blk_set_hotplug", 00:11:00.668 "bdev_iscsi_delete", 00:11:00.668 "bdev_iscsi_create", 00:11:00.668 "bdev_iscsi_set_options", 00:11:00.668 "accel_error_inject_error", 00:11:00.668 "ioat_scan_accel_module", 00:11:00.668 "dsa_scan_accel_module", 00:11:00.668 "iaa_scan_accel_module", 00:11:00.668 "keyring_file_remove_key", 00:11:00.668 "keyring_file_add_key", 00:11:00.668 "keyring_linux_set_options", 00:11:00.668 "fsdev_aio_delete", 00:11:00.668 "fsdev_aio_create", 00:11:00.668 "iscsi_get_histogram", 00:11:00.668 "iscsi_enable_histogram", 00:11:00.668 "iscsi_set_options", 00:11:00.668 "iscsi_get_auth_groups", 00:11:00.668 "iscsi_auth_group_remove_secret", 00:11:00.668 "iscsi_auth_group_add_secret", 00:11:00.668 "iscsi_delete_auth_group", 00:11:00.668 "iscsi_create_auth_group", 00:11:00.668 "iscsi_set_discovery_auth", 00:11:00.668 "iscsi_get_options", 00:11:00.668 "iscsi_target_node_request_logout", 00:11:00.668 "iscsi_target_node_set_redirect", 00:11:00.669 "iscsi_target_node_set_auth", 00:11:00.669 "iscsi_target_node_add_lun", 00:11:00.669 "iscsi_get_stats", 00:11:00.669 "iscsi_get_connections", 00:11:00.669 "iscsi_portal_group_set_auth", 00:11:00.669 "iscsi_start_portal_group", 00:11:00.669 "iscsi_delete_portal_group", 00:11:00.669 "iscsi_create_portal_group", 00:11:00.669 "iscsi_get_portal_groups", 00:11:00.669 "iscsi_delete_target_node", 00:11:00.669 "iscsi_target_node_remove_pg_ig_maps", 00:11:00.669 "iscsi_target_node_add_pg_ig_maps", 00:11:00.669 "iscsi_create_target_node", 00:11:00.669 "iscsi_get_target_nodes", 00:11:00.669 "iscsi_delete_initiator_group", 00:11:00.669 "iscsi_initiator_group_remove_initiators", 00:11:00.669 "iscsi_initiator_group_add_initiators", 00:11:00.669 "iscsi_create_initiator_group", 00:11:00.669 "iscsi_get_initiator_groups", 00:11:00.669 "nvmf_set_crdt", 00:11:00.669 "nvmf_set_config", 00:11:00.669 "nvmf_set_max_subsystems", 00:11:00.669 "nvmf_stop_mdns_prr", 00:11:00.669 "nvmf_publish_mdns_prr", 00:11:00.669 "nvmf_subsystem_get_listeners", 00:11:00.669 "nvmf_subsystem_get_qpairs", 00:11:00.669 "nvmf_subsystem_get_controllers", 00:11:00.669 "nvmf_get_stats", 00:11:00.669 "nvmf_get_transports", 00:11:00.669 "nvmf_create_transport", 00:11:00.669 "nvmf_get_targets", 00:11:00.669 
"nvmf_delete_target", 00:11:00.669 "nvmf_create_target", 00:11:00.669 "nvmf_subsystem_allow_any_host", 00:11:00.669 "nvmf_subsystem_set_keys", 00:11:00.669 "nvmf_subsystem_remove_host", 00:11:00.669 "nvmf_subsystem_add_host", 00:11:00.669 "nvmf_ns_remove_host", 00:11:00.669 "nvmf_ns_add_host", 00:11:00.669 "nvmf_subsystem_remove_ns", 00:11:00.669 "nvmf_subsystem_set_ns_ana_group", 00:11:00.669 "nvmf_subsystem_add_ns", 00:11:00.669 "nvmf_subsystem_listener_set_ana_state", 00:11:00.669 "nvmf_discovery_get_referrals", 00:11:00.669 "nvmf_discovery_remove_referral", 00:11:00.669 "nvmf_discovery_add_referral", 00:11:00.669 "nvmf_subsystem_remove_listener", 00:11:00.669 "nvmf_subsystem_add_listener", 00:11:00.669 "nvmf_delete_subsystem", 00:11:00.669 "nvmf_create_subsystem", 00:11:00.669 "nvmf_get_subsystems", 00:11:00.669 "env_dpdk_get_mem_stats", 00:11:00.669 "nbd_get_disks", 00:11:00.669 "nbd_stop_disk", 00:11:00.669 "nbd_start_disk", 00:11:00.669 "ublk_recover_disk", 00:11:00.669 "ublk_get_disks", 00:11:00.669 "ublk_stop_disk", 00:11:00.669 "ublk_start_disk", 00:11:00.669 "ublk_destroy_target", 00:11:00.669 "ublk_create_target", 00:11:00.669 "virtio_blk_create_transport", 00:11:00.669 "virtio_blk_get_transports", 00:11:00.669 "vhost_controller_set_coalescing", 00:11:00.669 "vhost_get_controllers", 00:11:00.669 "vhost_delete_controller", 00:11:00.669 "vhost_create_blk_controller", 00:11:00.669 "vhost_scsi_controller_remove_target", 00:11:00.669 "vhost_scsi_controller_add_target", 00:11:00.669 "vhost_start_scsi_controller", 00:11:00.669 "vhost_create_scsi_controller", 00:11:00.669 "thread_set_cpumask", 00:11:00.669 "scheduler_set_options", 00:11:00.669 "framework_get_governor", 00:11:00.669 "framework_get_scheduler", 00:11:00.669 "framework_set_scheduler", 00:11:00.669 "framework_get_reactors", 00:11:00.669 "thread_get_io_channels", 00:11:00.669 "thread_get_pollers", 00:11:00.669 "thread_get_stats", 00:11:00.669 "framework_monitor_context_switch", 00:11:00.669 "spdk_kill_instance", 00:11:00.669 "log_enable_timestamps", 00:11:00.669 "log_get_flags", 00:11:00.669 "log_clear_flag", 00:11:00.669 "log_set_flag", 00:11:00.669 "log_get_level", 00:11:00.669 "log_set_level", 00:11:00.669 "log_get_print_level", 00:11:00.669 "log_set_print_level", 00:11:00.669 "framework_enable_cpumask_locks", 00:11:00.669 "framework_disable_cpumask_locks", 00:11:00.669 "framework_wait_init", 00:11:00.669 "framework_start_init", 00:11:00.669 "scsi_get_devices", 00:11:00.669 "bdev_get_histogram", 00:11:00.669 "bdev_enable_histogram", 00:11:00.669 "bdev_set_qos_limit", 00:11:00.669 "bdev_set_qd_sampling_period", 00:11:00.669 "bdev_get_bdevs", 00:11:00.669 "bdev_reset_iostat", 00:11:00.669 "bdev_get_iostat", 00:11:00.669 "bdev_examine", 00:11:00.669 "bdev_wait_for_examine", 00:11:00.669 "bdev_set_options", 00:11:00.669 "accel_get_stats", 00:11:00.669 "accel_set_options", 00:11:00.669 "accel_set_driver", 00:11:00.669 "accel_crypto_key_destroy", 00:11:00.669 "accel_crypto_keys_get", 00:11:00.669 "accel_crypto_key_create", 00:11:00.669 "accel_assign_opc", 00:11:00.669 "accel_get_module_info", 00:11:00.669 "accel_get_opc_assignments", 00:11:00.669 "vmd_rescan", 00:11:00.669 "vmd_remove_device", 00:11:00.669 "vmd_enable", 00:11:00.669 "sock_get_default_impl", 00:11:00.669 "sock_set_default_impl", 00:11:00.669 "sock_impl_set_options", 00:11:00.669 "sock_impl_get_options", 00:11:00.669 "iobuf_get_stats", 00:11:00.669 "iobuf_set_options", 00:11:00.669 "keyring_get_keys", 00:11:00.669 "framework_get_pci_devices", 00:11:00.669 
"framework_get_config", 00:11:00.669 "framework_get_subsystems", 00:11:00.669 "fsdev_set_opts", 00:11:00.669 "fsdev_get_opts", 00:11:00.669 "trace_get_info", 00:11:00.669 "trace_get_tpoint_group_mask", 00:11:00.669 "trace_disable_tpoint_group", 00:11:00.669 "trace_enable_tpoint_group", 00:11:00.669 "trace_clear_tpoint_mask", 00:11:00.669 "trace_set_tpoint_mask", 00:11:00.669 "notify_get_notifications", 00:11:00.669 "notify_get_types", 00:11:00.669 "spdk_get_version", 00:11:00.669 "rpc_get_methods" 00:11:00.669 ] 00:11:00.669 13:44:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.669 13:44:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:00.669 13:44:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59432 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 59432 ']' 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 59432 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59432 00:11:00.669 killing process with pid 59432 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59432' 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 59432 00:11:00.669 13:44:47 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 59432 00:11:03.973 ************************************ 00:11:03.973 END TEST spdkcli_tcp 00:11:03.973 ************************************ 00:11:03.973 00:11:03.973 real 0m4.713s 00:11:03.973 user 0m8.589s 00:11:03.973 sys 0m0.652s 00:11:03.973 13:44:50 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.973 13:44:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.973 13:44:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:03.973 13:44:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:03.973 13:44:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.973 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:11:03.973 ************************************ 00:11:03.973 START TEST dpdk_mem_utility 00:11:03.973 ************************************ 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:03.973 * Looking for test storage... 
00:11:03.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.973 13:44:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:03.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.973 --rc genhtml_branch_coverage=1 00:11:03.973 --rc genhtml_function_coverage=1 00:11:03.973 --rc genhtml_legend=1 00:11:03.973 --rc geninfo_all_blocks=1 00:11:03.973 --rc geninfo_unexecuted_blocks=1 00:11:03.973 00:11:03.973 ' 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:03.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.973 --rc 
genhtml_branch_coverage=1 00:11:03.973 --rc genhtml_function_coverage=1 00:11:03.973 --rc genhtml_legend=1 00:11:03.973 --rc geninfo_all_blocks=1 00:11:03.973 --rc geninfo_unexecuted_blocks=1 00:11:03.973 00:11:03.973 ' 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:03.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.973 --rc genhtml_branch_coverage=1 00:11:03.973 --rc genhtml_function_coverage=1 00:11:03.973 --rc genhtml_legend=1 00:11:03.973 --rc geninfo_all_blocks=1 00:11:03.973 --rc geninfo_unexecuted_blocks=1 00:11:03.973 00:11:03.973 ' 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:03.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.973 --rc genhtml_branch_coverage=1 00:11:03.973 --rc genhtml_function_coverage=1 00:11:03.973 --rc genhtml_legend=1 00:11:03.973 --rc geninfo_all_blocks=1 00:11:03.973 --rc geninfo_unexecuted_blocks=1 00:11:03.973 00:11:03.973 ' 00:11:03.973 13:44:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:03.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.973 13:44:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59559 00:11:03.973 13:44:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59559 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59559 ']' 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.973 13:44:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.973 13:44:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:03.973 [2024-11-04 13:44:50.721726] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
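[note] dpdk_mem_utility drives the flow shown below: an RPC asks the running target to write its DPDK allocator state to a file, then a helper script summarizes that file, first as heap/mempool/memzone totals and then element by element. The three invocations appear in the trace; treat this as a sketch of the order, not of test_dpdk_mem_info.sh itself:
    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # whole-heap summary (816 MiB heap, 9 mempools, 6 memzones)
    scripts/dpdk_mem_info.py -m 0            # per-element view for heap id 0, the long address list below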
00:11:03.973 [2024-11-04 13:44:50.722132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59559 ] 00:11:04.232 [2024-11-04 13:44:50.928704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.232 [2024-11-04 13:44:51.107233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.610 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.610 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:11:05.610 13:44:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:05.610 13:44:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:05.610 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.610 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:05.611 { 00:11:05.611 "filename": "/tmp/spdk_mem_dump.txt" 00:11:05.611 } 00:11:05.611 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.611 13:44:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:05.611 DPDK memory size 816.000000 MiB in 1 heap(s) 00:11:05.611 1 heaps totaling size 816.000000 MiB 00:11:05.611 size: 816.000000 MiB heap id: 0 00:11:05.611 end heaps---------- 00:11:05.611 9 mempools totaling size 595.772034 MiB 00:11:05.611 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:05.611 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:05.611 size: 92.545471 MiB name: bdev_io_59559 00:11:05.611 size: 50.003479 MiB name: msgpool_59559 00:11:05.611 size: 36.509338 MiB name: fsdev_io_59559 00:11:05.611 size: 21.763794 MiB name: PDU_Pool 00:11:05.611 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:05.611 size: 4.133484 MiB name: evtpool_59559 00:11:05.611 size: 0.026123 MiB name: Session_Pool 00:11:05.611 end mempools------- 00:11:05.611 6 memzones totaling size 4.142822 MiB 00:11:05.611 size: 1.000366 MiB name: RG_ring_0_59559 00:11:05.611 size: 1.000366 MiB name: RG_ring_1_59559 00:11:05.611 size: 1.000366 MiB name: RG_ring_4_59559 00:11:05.611 size: 1.000366 MiB name: RG_ring_5_59559 00:11:05.611 size: 0.125366 MiB name: RG_ring_2_59559 00:11:05.611 size: 0.015991 MiB name: RG_ring_3_59559 00:11:05.611 end memzones------- 00:11:05.611 13:44:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:05.611 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:11:05.611 list of free elements. 
size: 16.790649 MiB 00:11:05.611 element at address: 0x200006400000 with size: 1.995972 MiB 00:11:05.611 element at address: 0x20000a600000 with size: 1.995972 MiB 00:11:05.611 element at address: 0x200003e00000 with size: 1.991028 MiB 00:11:05.611 element at address: 0x200018d00040 with size: 0.999939 MiB 00:11:05.611 element at address: 0x200019100040 with size: 0.999939 MiB 00:11:05.611 element at address: 0x200019200000 with size: 0.999084 MiB 00:11:05.611 element at address: 0x200031e00000 with size: 0.994324 MiB 00:11:05.611 element at address: 0x200000400000 with size: 0.992004 MiB 00:11:05.611 element at address: 0x200018a00000 with size: 0.959656 MiB 00:11:05.611 element at address: 0x200019500040 with size: 0.936401 MiB 00:11:05.611 element at address: 0x200000200000 with size: 0.716980 MiB 00:11:05.611 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:11:05.611 element at address: 0x200000c00000 with size: 0.490173 MiB 00:11:05.611 element at address: 0x200018e00000 with size: 0.487976 MiB 00:11:05.611 element at address: 0x200019600000 with size: 0.485413 MiB 00:11:05.611 element at address: 0x200012c00000 with size: 0.443481 MiB 00:11:05.611 element at address: 0x200028000000 with size: 0.390442 MiB 00:11:05.611 element at address: 0x200000800000 with size: 0.350891 MiB 00:11:05.611 list of standard malloc elements. size: 199.288452 MiB 00:11:05.611 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:11:05.611 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:11:05.611 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:11:05.611 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:11:05.611 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:05.611 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:05.611 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:11:05.611 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:05.611 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:11:05.611 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:11:05.611 element at address: 0x200012bff040 with size: 0.000305 MiB 00:11:05.611 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:11:05.611 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:11:05.611 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:11:05.611 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200000cff000 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200012bff180 with size: 0.000244 MiB 00:11:05.611 element at address: 0x200012bff280 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff380 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff480 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff580 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff680 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff780 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff880 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bff980 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71880 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71980 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c72080 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012c72180 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:11:05.612 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:11:05.612 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 
00:11:05.612 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:11:05.612 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200028063f40 with size: 0.000244 MiB 00:11:05.612 element at address: 0x200028064040 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806af80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b080 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b180 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b280 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b380 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b480 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b580 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b680 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b780 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b880 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806b980 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806be80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806c080 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806c180 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806c280 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806c380 with size: 0.000244 MiB 00:11:05.612 element at address: 0x20002806c480 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806c580 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806c680 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806c780 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806c880 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806c980 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d080 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d180 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d280 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d380 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d480 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d580 
with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d680 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d780 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d880 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806d980 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806da80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806db80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806de80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806df80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e080 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e180 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e280 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e380 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e480 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e580 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e680 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e780 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e880 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806e980 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f080 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f180 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f280 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f380 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f480 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f580 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f680 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f780 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f880 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806f980 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:11:05.613 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:11:05.613 list of memzone associated elements. 
size: 599.920898 MiB 00:11:05.613 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:11:05.613 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:05.613 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:11:05.613 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:05.613 element at address: 0x200012df4740 with size: 92.045105 MiB 00:11:05.613 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59559_0 00:11:05.613 element at address: 0x200000dff340 with size: 48.003113 MiB 00:11:05.613 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59559_0 00:11:05.613 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:11:05.613 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59559_0 00:11:05.613 element at address: 0x2000197be900 with size: 20.255615 MiB 00:11:05.613 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:05.613 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:11:05.613 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:05.613 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:11:05.613 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59559_0 00:11:05.613 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:11:05.613 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59559 00:11:05.613 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:11:05.613 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59559 00:11:05.613 element at address: 0x200018efde00 with size: 1.008179 MiB 00:11:05.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:05.613 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:11:05.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:05.613 element at address: 0x200018afde00 with size: 1.008179 MiB 00:11:05.613 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:05.613 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:11:05.613 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:05.613 element at address: 0x200000cff100 with size: 1.000549 MiB 00:11:05.613 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59559 00:11:05.613 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:11:05.613 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59559 00:11:05.613 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:11:05.613 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59559 00:11:05.613 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:11:05.613 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59559 00:11:05.613 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:11:05.613 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59559 00:11:05.613 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:11:05.613 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59559 00:11:05.613 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:11:05.613 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:05.613 element at address: 0x200012c72280 with size: 0.500549 MiB 00:11:05.613 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:05.613 element at address: 0x20001967c440 with size: 0.250549 MiB 00:11:05.613 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:11:05.613 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:11:05.613 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59559 00:11:05.613 element at address: 0x20000085df80 with size: 0.125549 MiB 00:11:05.613 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59559 00:11:05.613 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:11:05.613 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:05.613 element at address: 0x200028064140 with size: 0.023804 MiB 00:11:05.613 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:05.613 element at address: 0x200000859d40 with size: 0.016174 MiB 00:11:05.613 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59559 00:11:05.613 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:11:05.613 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:05.613 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:11:05.613 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59559 00:11:05.613 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:11:05.613 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59559 00:11:05.613 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:11:05.613 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59559 00:11:05.613 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:11:05.613 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:05.613 13:44:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:05.613 13:44:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59559 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59559 ']' 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59559 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59559 00:11:05.613 killing process with pid 59559 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59559' 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59559 00:11:05.613 13:44:52 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59559 00:11:08.896 00:11:08.896 real 0m4.841s 00:11:08.896 user 0m4.807s 00:11:08.896 sys 0m0.638s 00:11:08.896 13:44:55 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.896 ************************************ 00:11:08.896 END TEST dpdk_mem_utility 00:11:08.896 ************************************ 00:11:08.896 13:44:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:08.896 13:44:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:08.896 13:44:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:08.896 13:44:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.896 13:44:55 -- common/autotest_common.sh@10 -- # set +x 
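[Editor's note] Before the event suite output begins: the element dump above is the DPDK heap and memzone report that test_dpdk_mem_info.sh collects from a live target before tearing it down. The recurring "size: 0.000244 MiB" entries are 256-byte elements (0.000244 MiB x 2^20 ~ 256 B), and the killprocess trace that follows the memzone list is the stock autotest_common.sh shutdown path. A minimal sketch of the same collect-and-teardown flow, assuming the helpers behave as the trace shows; the app binary path and the env_dpdk_get_mem_stats RPC as the source of the dump are assumptions, since the capture does not show those calls themselves:

  # start a target, snapshot DPDK memory stats over RPC, then shut down
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &     # assumed app under test
  spdk_pid=$!
  waitforlisten "$spdk_pid"                             # autotest_common.sh helper seen in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # assumed source of the dump above
  kill "$spdk_pid" && wait "$spdk_pid"                  # what "killprocess 59559" boils down to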
00:11:08.896 ************************************ 00:11:08.896 START TEST event 00:11:08.896 ************************************ 00:11:08.896 13:44:55 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:08.896 * Looking for test storage... 00:11:08.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:08.896 13:44:55 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.896 13:44:55 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.896 13:44:55 event -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.896 13:44:55 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.896 13:44:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.896 13:44:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.896 13:44:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.896 13:44:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.896 13:44:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.896 13:44:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.896 13:44:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.896 13:44:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.896 13:44:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.896 13:44:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.896 13:44:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.896 13:44:55 event -- scripts/common.sh@344 -- # case "$op" in 00:11:08.896 13:44:55 event -- scripts/common.sh@345 -- # : 1 00:11:08.896 13:44:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.897 13:44:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.897 13:44:55 event -- scripts/common.sh@365 -- # decimal 1 00:11:08.897 13:44:55 event -- scripts/common.sh@353 -- # local d=1 00:11:08.897 13:44:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.897 13:44:55 event -- scripts/common.sh@355 -- # echo 1 00:11:08.897 13:44:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.897 13:44:55 event -- scripts/common.sh@366 -- # decimal 2 00:11:08.897 13:44:55 event -- scripts/common.sh@353 -- # local d=2 00:11:08.897 13:44:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.897 13:44:55 event -- scripts/common.sh@355 -- # echo 2 00:11:08.897 13:44:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.897 13:44:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.897 13:44:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.897 13:44:55 event -- scripts/common.sh@368 -- # return 0 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.897 --rc genhtml_branch_coverage=1 00:11:08.897 --rc genhtml_function_coverage=1 00:11:08.897 --rc genhtml_legend=1 00:11:08.897 --rc geninfo_all_blocks=1 00:11:08.897 --rc geninfo_unexecuted_blocks=1 00:11:08.897 00:11:08.897 ' 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.897 --rc genhtml_branch_coverage=1 00:11:08.897 --rc genhtml_function_coverage=1 00:11:08.897 --rc genhtml_legend=1 00:11:08.897 --rc 
geninfo_all_blocks=1 00:11:08.897 --rc geninfo_unexecuted_blocks=1 00:11:08.897 00:11:08.897 ' 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.897 --rc genhtml_branch_coverage=1 00:11:08.897 --rc genhtml_function_coverage=1 00:11:08.897 --rc genhtml_legend=1 00:11:08.897 --rc geninfo_all_blocks=1 00:11:08.897 --rc geninfo_unexecuted_blocks=1 00:11:08.897 00:11:08.897 ' 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.897 --rc genhtml_branch_coverage=1 00:11:08.897 --rc genhtml_function_coverage=1 00:11:08.897 --rc genhtml_legend=1 00:11:08.897 --rc geninfo_all_blocks=1 00:11:08.897 --rc geninfo_unexecuted_blocks=1 00:11:08.897 00:11:08.897 ' 00:11:08.897 13:44:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:08.897 13:44:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:08.897 13:44:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:11:08.897 13:44:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.897 13:44:55 event -- common/autotest_common.sh@10 -- # set +x 00:11:08.897 ************************************ 00:11:08.897 START TEST event_perf 00:11:08.897 ************************************ 00:11:08.897 13:44:55 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:08.897 Running I/O for 1 seconds...[2024-11-04 13:44:55.528522] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:11:08.897 [2024-11-04 13:44:55.528795] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59678 ] 00:11:08.897 [2024-11-04 13:44:55.723529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.155 [2024-11-04 13:44:55.909255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.155 [2024-11-04 13:44:55.909374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.155 [2024-11-04 13:44:55.909541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.155 [2024-11-04 13:44:55.909596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.528 Running I/O for 1 seconds... 00:11:10.528 lcore 0: 170590 00:11:10.528 lcore 1: 170589 00:11:10.528 lcore 2: 170591 00:11:10.528 lcore 3: 170590 00:11:10.528 done. 
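[Editor's note] Two things worth unpacking in the block that just ended. First, the scripts/common.sh fragment traced above ("lt 1.15 2" via cmp_versions) is a pure-bash version comparison: both strings are split on '.', '-' and ':' and compared field by field as integers, with the shorter array padded with zeros. A condensed standalone sketch of the same idea; the real helper additionally routes each field through its decimal normalizer, which this sketch skips, so it assumes purely numeric fields:

  lt() {   # succeed (return 0) when version $1 sorts before version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x"   # the branch this run took

Second, the lines ending in "done." are the event_perf run itself: -m 0xF starts one reactor per core in the mask (four here, matching "Total cores available: 4"), -t 1 runs for one second, and each "lcore N:" line appears to report that reactor's event counter, roughly 170k apiece in this run (that reading is inferred from the output, not from event_perf.c). The invocation, exactly as run_test issued it:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1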
00:11:10.528 00:11:10.528 real 0m1.717s 00:11:10.528 user 0m4.450s 00:11:10.528 sys 0m0.140s 00:11:10.528 13:44:57 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.528 13:44:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:10.528 ************************************ 00:11:10.528 END TEST event_perf 00:11:10.528 ************************************ 00:11:10.528 13:44:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:10.528 13:44:57 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.528 13:44:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.528 13:44:57 event -- common/autotest_common.sh@10 -- # set +x 00:11:10.528 ************************************ 00:11:10.528 START TEST event_reactor 00:11:10.528 ************************************ 00:11:10.528 13:44:57 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:10.528 [2024-11-04 13:44:57.309094] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:11:10.528 [2024-11-04 13:44:57.309440] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:11:10.786 [2024-11-04 13:44:57.498486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.786 [2024-11-04 13:44:57.669999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.157 test_start 00:11:12.157 oneshot 00:11:12.157 tick 100 00:11:12.157 tick 100 00:11:12.157 tick 250 00:11:12.157 tick 100 00:11:12.157 tick 100 00:11:12.157 tick 100 00:11:12.157 tick 250 00:11:12.157 tick 500 00:11:12.157 tick 100 00:11:12.157 tick 100 00:11:12.157 tick 250 00:11:12.157 tick 100 00:11:12.157 tick 100 00:11:12.157 test_end 00:11:12.157 ************************************ 00:11:12.157 END TEST event_reactor 00:11:12.157 ************************************ 00:11:12.157 00:11:12.157 real 0m1.657s 00:11:12.157 user 0m1.426s 00:11:12.157 sys 0m0.120s 00:11:12.157 13:44:58 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.157 13:44:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:12.157 13:44:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:12.157 13:44:58 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:12.157 13:44:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.157 13:44:58 event -- common/autotest_common.sh@10 -- # set +x 00:11:12.157 ************************************ 00:11:12.157 START TEST event_reactor_perf 00:11:12.157 ************************************ 00:11:12.157 13:44:58 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:12.158 [2024-11-04 13:44:59.058949] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
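[Editor's note] The test_start/tick/test_end block above is the event_reactor test driving a single reactor (-c 0x1 in the EAL parameters) for one second; a plausible reading of the output is that each "tick N" line marks a timer poller firing, with 100, 250 and 500 corresponding to three different poller periods the test registers — inferred from the pattern of the output rather than verified against reactor.c. The reactor_perf run starting at the bottom of this block is the throughput companion: it pumps events through one reactor and prints a single "Performance: N events per second" figure (347739 below). Both were invoked by event.sh as:

  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1              # the tick trace above
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1   # the events/sec figure below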
00:11:12.158 [2024-11-04 13:44:59.059162] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59754 ] 00:11:12.416 [2024-11-04 13:44:59.249087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.674 [2024-11-04 13:44:59.377869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.049 test_start 00:11:14.049 test_end 00:11:14.049 Performance: 347739 events per second 00:11:14.049 ************************************ 00:11:14.049 END TEST event_reactor_perf 00:11:14.049 ************************************ 00:11:14.049 00:11:14.049 real 0m1.629s 00:11:14.049 user 0m1.400s 00:11:14.049 sys 0m0.119s 00:11:14.049 13:45:00 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.049 13:45:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:14.049 13:45:00 event -- event/event.sh@49 -- # uname -s 00:11:14.049 13:45:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:14.049 13:45:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:14.049 13:45:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:14.049 13:45:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.049 13:45:00 event -- common/autotest_common.sh@10 -- # set +x 00:11:14.049 ************************************ 00:11:14.049 START TEST event_scheduler 00:11:14.049 ************************************ 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:14.049 * Looking for test storage... 
00:11:14.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.049 13:45:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:14.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.049 --rc genhtml_branch_coverage=1 00:11:14.049 --rc genhtml_function_coverage=1 00:11:14.049 --rc genhtml_legend=1 00:11:14.049 --rc geninfo_all_blocks=1 00:11:14.049 --rc geninfo_unexecuted_blocks=1 00:11:14.049 00:11:14.049 ' 00:11:14.049 13:45:00 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.050 --rc genhtml_branch_coverage=1 00:11:14.050 --rc genhtml_function_coverage=1 00:11:14.050 --rc genhtml_legend=1 00:11:14.050 --rc geninfo_all_blocks=1 00:11:14.050 --rc geninfo_unexecuted_blocks=1 00:11:14.050 00:11:14.050 ' 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.050 --rc genhtml_branch_coverage=1 00:11:14.050 --rc genhtml_function_coverage=1 00:11:14.050 --rc genhtml_legend=1 00:11:14.050 --rc geninfo_all_blocks=1 00:11:14.050 --rc geninfo_unexecuted_blocks=1 00:11:14.050 00:11:14.050 ' 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:14.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.050 --rc genhtml_branch_coverage=1 00:11:14.050 --rc genhtml_function_coverage=1 00:11:14.050 --rc genhtml_legend=1 00:11:14.050 --rc geninfo_all_blocks=1 00:11:14.050 --rc geninfo_unexecuted_blocks=1 00:11:14.050 00:11:14.050 ' 00:11:14.050 13:45:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:14.050 13:45:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59830 00:11:14.050 13:45:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:14.050 13:45:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:14.050 13:45:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59830 00:11:14.050 13:45:00 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59830 ']' 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.050 13:45:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:14.309 [2024-11-04 13:45:01.045848] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:11:14.309 [2024-11-04 13:45:01.046269] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:11:14.567 [2024-11-04 13:45:01.238788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.567 [2024-11-04 13:45:01.374442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.567 [2024-11-04 13:45:01.374558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.567 [2024-11-04 13:45:01.374719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.567 [2024-11-04 13:45:01.374746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:11:15.133 13:45:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:15.133 POWER: Cannot set governor of lcore 0 to userspace 00:11:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:15.133 POWER: Cannot set governor of lcore 0 to performance 00:11:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:15.133 POWER: Cannot set governor of lcore 0 to userspace 00:11:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:15.133 POWER: Cannot set governor of lcore 0 to userspace 00:11:15.133 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:15.133 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:15.133 POWER: Unable to set Power Management Environment for lcore 0 00:11:15.133 [2024-11-04 13:45:02.033090] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:11:15.133 [2024-11-04 13:45:02.033119] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:11:15.133 [2024-11-04 13:45:02.033132] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:15.133 [2024-11-04 13:45:02.033154] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:15.133 [2024-11-04 13:45:02.033166] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:15.133 [2024-11-04 13:45:02.033179] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.133 13:45:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.133 13:45:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 [2024-11-04 13:45:02.395376] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:15.700 13:45:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:15.700 13:45:02 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:15.700 13:45:02 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 ************************************ 00:11:15.700 START TEST scheduler_create_thread 00:11:15.700 ************************************ 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 2 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 3 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 4 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 5 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 6 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 7 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 8 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 9 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 10 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 13:45:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 13:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.076 13:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:17.076 13:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:17.076 13:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.076 13:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:18.459 ************************************ 00:11:18.459 END TEST scheduler_create_thread 00:11:18.459 ************************************ 00:11:18.459 13:45:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.459 00:11:18.459 real 0m2.619s 00:11:18.459 user 0m0.013s 00:11:18.459 sys 0m0.005s 00:11:18.459 13:45:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.459 13:45:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:18.459 13:45:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:18.459 13:45:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59830 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59830 ']' 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59830 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59830 00:11:18.459 killing process with pid 59830 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59830' 00:11:18.459 13:45:05 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59830 00:11:18.459 13:45:05 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59830 00:11:18.718 [2024-11-04 13:45:05.507533] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:20.090 00:11:20.090 real 0m6.188s 00:11:20.090 user 0m10.835s 00:11:20.090 sys 0m0.508s 00:11:20.090 13:45:06 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.090 ************************************ 00:11:20.090 END TEST event_scheduler 00:11:20.090 ************************************ 00:11:20.090 13:45:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:20.090 13:45:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:20.090 13:45:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:20.090 13:45:06 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:20.090 13:45:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.090 13:45:06 event -- common/autotest_common.sh@10 -- # set +x 00:11:20.090 ************************************ 00:11:20.090 START TEST app_repeat 00:11:20.090 ************************************ 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:20.090 Process app_repeat pid: 59946 00:11:20.090 spdk_app_start Round 0 00:11:20.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59946 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59946' 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:20.090 13:45:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59946 /var/tmp/spdk-nbd.sock 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59946 ']' 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.090 13:45:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:20.090 [2024-11-04 13:45:06.998999] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
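[Editor's note] Before the app_repeat output continues: the scheduler test that just ended follows a fixed choreography — start the app with --wait-for-rpc, switch it to the dynamic scheduler over RPC, finish init, then create, retune and delete threads through an rpc.py plugin. The POWER "Cannot set governor" and GUEST_CHANNEL errors earlier in the block are the dynamic scheduler probing cpufreq sysfs knobs this VM does not expose; it logs "Unable to initialize dpdk governor" and proceeds without power management, which is why the test still completes cleanly. Condensed from the trace (thread ids 11 and 12 were returned by earlier create calls in this run; the plugin's location on disk is not shown in the capture):

  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  waitforlisten "$scheduler_pid"
  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init
  # the thread-control verbs live in a plugin, not in core rpc.py
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12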
00:11:20.090 [2024-11-04 13:45:06.999163] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59946 ] 00:11:20.353 [2024-11-04 13:45:07.200153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:20.623 [2024-11-04 13:45:07.381257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.623 [2024-11-04 13:45:07.381281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.190 13:45:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:21.190 13:45:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:21.190 13:45:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:21.448 Malloc0 00:11:21.448 13:45:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:21.707 Malloc1 00:11:21.965 13:45:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:21.965 13:45:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.965 13:45:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:21.965 13:45:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:21.966 13:45:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:21.966 /dev/nbd0 00:11:22.224 13:45:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:22.224 13:45:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:22.225 13:45:08 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:22.225 1+0 records in 00:11:22.225 1+0 records out 00:11:22.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321658 s, 12.7 MB/s 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:22.225 13:45:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:22.225 13:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.225 13:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.225 13:45:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:22.483 /dev/nbd1 00:11:22.483 13:45:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:22.483 13:45:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:22.483 1+0 records in 00:11:22.483 1+0 records out 00:11:22.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046569 s, 8.8 MB/s 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:22.483 13:45:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:22.483 13:45:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.483 13:45:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.483 13:45:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:22.483 13:45:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.483 
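The waitfornbd calls traced above (autotest_common.sh@870-@891) first poll /proc/partitions until the named nbd node appears, then read a single 4 KiB block back through the device with O_DIRECT to prove it answers I/O. The sketch below is reconstructed directly from the traced commands; the sleep between retries and the temp-file directory variable are assumptions not visible in the xtrace.

    waitfornbd() {
        local nbd_name=$1 i size
        # wait up to 20 tries for the device to show up in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed retry interval; not traced
        done
        # read one direct-I/O block; a non-zero byte count means the device is live
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$testdir/nbdtest")
            rm -f "$testdir/nbdtest"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }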
13:45:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:22.747 { 00:11:22.747 "nbd_device": "/dev/nbd0", 00:11:22.747 "bdev_name": "Malloc0" 00:11:22.747 }, 00:11:22.747 { 00:11:22.747 "nbd_device": "/dev/nbd1", 00:11:22.747 "bdev_name": "Malloc1" 00:11:22.747 } 00:11:22.747 ]' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:22.747 { 00:11:22.747 "nbd_device": "/dev/nbd0", 00:11:22.747 "bdev_name": "Malloc0" 00:11:22.747 }, 00:11:22.747 { 00:11:22.747 "nbd_device": "/dev/nbd1", 00:11:22.747 "bdev_name": "Malloc1" 00:11:22.747 } 00:11:22.747 ]' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:22.747 /dev/nbd1' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:22.747 /dev/nbd1' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:22.747 256+0 records in 00:11:22.747 256+0 records out 00:11:22.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815911 s, 129 MB/s 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:22.747 256+0 records in 00:11:22.747 256+0 records out 00:11:22.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308839 s, 34.0 MB/s 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:22.747 13:45:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:23.029 256+0 records in 00:11:23.029 256+0 records out 00:11:23.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360669 s, 29.1 MB/s 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:23.029 13:45:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.029 13:45:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.287 13:45:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:23.545 13:45:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.545 13:45:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:23.803 13:45:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:23.804 13:45:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:23.804 13:45:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:23.804 13:45:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:24.368 13:45:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:25.744 [2024-11-04 13:45:12.356773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:25.744 [2024-11-04 13:45:12.478198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.744 [2024-11-04 13:45:12.478202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.003 [2024-11-04 13:45:12.688111] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:26.003 [2024-11-04 13:45:12.688471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:27.377 spdk_app_start Round 1 00:11:27.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:27.377 13:45:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:27.377 13:45:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:27.377 13:45:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59946 /var/tmp/spdk-nbd.sock 00:11:27.377 13:45:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59946 ']' 00:11:27.377 13:45:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:27.377 13:45:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:27.377 13:45:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
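Each "Waiting for process to start up and listen on UNIX domain socket ..." message above comes from waitforlisten, which the trace shows retrying with max_retries=100 (@838) and finishing once (( i == 0 )) / return 0 fire at @862-@866. The probe it runs per iteration is not traced; a plausible sketch, assuming an RPC ping over the socket (rpc_get_methods is one such no-op call), is:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target process died
            # assumed probe: any successful RPC means the server is listening
            if "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5    # assumed back-off; not visible in the trace
        done
        return 1
    }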
00:11:27.377 13:45:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:27.377 13:45:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:27.636 13:45:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:27.636 13:45:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:27.636 13:45:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:27.894 Malloc0 00:11:27.894 13:45:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:28.461 Malloc1 00:11:28.461 13:45:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.461 13:45:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:28.719 /dev/nbd0 00:11:28.719 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:28.719 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:28.719 13:45:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:28.719 1+0 records in 00:11:28.720 1+0 records out 
00:11:28.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262958 s, 15.6 MB/s 00:11:28.720 13:45:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.720 13:45:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:28.720 13:45:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.720 13:45:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:28.720 13:45:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:28.720 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.720 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.720 13:45:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:28.978 /dev/nbd1 00:11:28.978 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:28.978 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:28.978 13:45:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:28.978 1+0 records in 00:11:28.978 1+0 records out 00:11:28.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304156 s, 13.5 MB/s 00:11:28.979 13:45:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.979 13:45:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:28.979 13:45:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.979 13:45:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:28.979 13:45:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:28.979 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.979 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.979 13:45:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.979 13:45:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.979 13:45:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:29.238 { 00:11:29.238 "nbd_device": "/dev/nbd0", 00:11:29.238 "bdev_name": "Malloc0" 00:11:29.238 }, 00:11:29.238 { 00:11:29.238 "nbd_device": "/dev/nbd1", 00:11:29.238 "bdev_name": "Malloc1" 00:11:29.238 } 
00:11:29.238 ]' 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:29.238 { 00:11:29.238 "nbd_device": "/dev/nbd0", 00:11:29.238 "bdev_name": "Malloc0" 00:11:29.238 }, 00:11:29.238 { 00:11:29.238 "nbd_device": "/dev/nbd1", 00:11:29.238 "bdev_name": "Malloc1" 00:11:29.238 } 00:11:29.238 ]' 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:29.238 /dev/nbd1' 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:29.238 /dev/nbd1' 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:29.238 13:45:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:29.496 256+0 records in 00:11:29.496 256+0 records out 00:11:29.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00979372 s, 107 MB/s 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:29.496 256+0 records in 00:11:29.496 256+0 records out 00:11:29.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308605 s, 34.0 MB/s 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:29.496 256+0 records in 00:11:29.496 256+0 records out 00:11:29.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283714 s, 37.0 MB/s 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:29.496 13:45:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.496 13:45:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.755 13:45:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.013 13:45:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:30.271 13:45:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:30.271 13:45:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:30.836 13:45:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:32.208 [2024-11-04 13:45:18.815200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:32.208 [2024-11-04 13:45:18.935380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.208 [2024-11-04 13:45:18.935389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.465 [2024-11-04 13:45:19.151641] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:32.465 [2024-11-04 13:45:19.151741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:33.877 13:45:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:33.877 spdk_app_start Round 2 00:11:33.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:33.877 13:45:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:33.877 13:45:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59946 /var/tmp/spdk-nbd.sock 00:11:33.877 13:45:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59946 ']' 00:11:33.877 13:45:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:33.877 13:45:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:33.877 13:45:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
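The nbd_get_count sequence traced at nbd_common.sh@61-@66 (once with two disks attached, once after spdk_kill_instance when it yields 0) counts /dev/nbd entries reported by the nbd_get_disks RPC. Every command below appears verbatim in the xtrace; only the function wrapper and variable scoping are reconstructed.

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches, hence the stray 'true' in the trace
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

The caller (nbd_common.sh@104-@105) then asserts the count matches the expected number of attached devices, e.g. '[' 0 -ne 0 ']' after teardown.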
00:11:33.877 13:45:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:33.877 13:45:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:34.134 13:45:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.134 13:45:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:34.134 13:45:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:34.392 Malloc0 00:11:34.392 13:45:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:34.651 Malloc1 00:11:34.651 13:45:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:34.651 13:45:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:34.909 /dev/nbd0 00:11:34.909 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:34.909 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:34.909 1+0 records in 00:11:34.909 1+0 records out 
00:11:34.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342601 s, 12.0 MB/s 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:34.909 13:45:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:34.909 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.909 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:34.909 13:45:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:35.167 /dev/nbd1 00:11:35.167 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:35.167 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:35.167 13:45:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:35.167 13:45:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:11:35.167 13:45:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:35.167 13:45:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:35.167 13:45:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:35.167 1+0 records in 00:11:35.167 1+0 records out 00:11:35.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340132 s, 12.0 MB/s 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:35.167 13:45:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:11:35.167 13:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.167 13:45:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.167 13:45:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:35.167 13:45:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.167 13:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:35.424 13:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:35.424 { 00:11:35.424 "nbd_device": "/dev/nbd0", 00:11:35.424 "bdev_name": "Malloc0" 00:11:35.424 }, 00:11:35.424 { 00:11:35.425 "nbd_device": "/dev/nbd1", 00:11:35.425 "bdev_name": "Malloc1" 00:11:35.425 } 
00:11:35.425 ]' 00:11:35.425 13:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:35.425 { 00:11:35.425 "nbd_device": "/dev/nbd0", 00:11:35.425 "bdev_name": "Malloc0" 00:11:35.425 }, 00:11:35.425 { 00:11:35.425 "nbd_device": "/dev/nbd1", 00:11:35.425 "bdev_name": "Malloc1" 00:11:35.425 } 00:11:35.425 ]' 00:11:35.425 13:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:35.683 /dev/nbd1' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:35.683 /dev/nbd1' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:35.683 256+0 records in 00:11:35.683 256+0 records out 00:11:35.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110254 s, 95.1 MB/s 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:35.683 256+0 records in 00:11:35.683 256+0 records out 00:11:35.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195639 s, 53.6 MB/s 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:35.683 256+0 records in 00:11:35.683 256+0 records out 00:11:35.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282999 s, 37.1 MB/s 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:35.683 13:45:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.683 13:45:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:35.940 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.940 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:35.940 13:45:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.940 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.940 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.940 13:45:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:35.941 13:45:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:35.941 13:45:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.941 13:45:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.941 13:45:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.199 13:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.199 13:45:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:36.199 13:45:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:36.199 13:45:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:36.458 13:45:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:36.458 13:45:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:37.023 13:45:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:38.393 [2024-11-04 13:45:24.912807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.393 [2024-11-04 13:45:25.036443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.393 [2024-11-04 13:45:25.036444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.393 [2024-11-04 13:45:25.244250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:38.393 [2024-11-04 13:45:25.244366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:39.766 13:45:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59946 /var/tmp/spdk-nbd.sock 00:11:39.766 13:45:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59946 ']' 00:11:39.766 13:45:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:39.766 13:45:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:39.766 13:45:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
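The write/verify pairs repeated in every round implement nbd_dd_data_verify (nbd_common.sh@70-@85): 1 MiB of /dev/urandom data is staged in the nbdrandtest file, copied onto each nbd device with O_DIRECT, and later compared back with cmp. This sketch follows the traced commands; the if/elif structure matches the '[' write = write ']' / '[' verify = verify ']' checks in the log, while the $testdir variable name is an assumption.

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=$testdir/nbdrandtest    # .../spdk/test/event per the trace
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"    # any byte difference fails the test
            done
            rm "$tmp_file"
        fi
    }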
00:11:39.766 13:45:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.766 13:45:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:11:40.331 13:45:26 event.app_repeat -- event/event.sh@39 -- # killprocess 59946 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59946 ']' 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59946 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59946 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59946' 00:11:40.331 killing process with pid 59946 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59946 00:11:40.331 13:45:26 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59946 00:11:41.267 spdk_app_start is called in Round 0. 00:11:41.267 Shutdown signal received, stop current app iteration 00:11:41.267 Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 reinitialization... 00:11:41.267 spdk_app_start is called in Round 1. 00:11:41.267 Shutdown signal received, stop current app iteration 00:11:41.267 Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 reinitialization... 00:11:41.267 spdk_app_start is called in Round 2. 00:11:41.267 Shutdown signal received, stop current app iteration 00:11:41.267 Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 reinitialization... 00:11:41.267 spdk_app_start is called in Round 3. 00:11:41.267 Shutdown signal received, stop current app iteration 00:11:41.267 13:45:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:41.267 13:45:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:41.267 ************************************ 00:11:41.267 END TEST app_repeat 00:11:41.267 ************************************ 00:11:41.267 00:11:41.267 real 0m21.191s 00:11:41.267 user 0m45.870s 00:11:41.267 sys 0m3.406s 00:11:41.267 13:45:28 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.267 13:45:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:41.267 13:45:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:41.267 13:45:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:41.267 13:45:28 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:41.267 13:45:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.267 13:45:28 event -- common/autotest_common.sh@10 -- # set +x 00:11:41.267 ************************************ 00:11:41.267 START TEST cpu_locks 00:11:41.267 ************************************ 00:11:41.267 13:45:28 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:41.525 * Looking for test storage... 
00:11:41.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.525 13:45:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.525 --rc genhtml_branch_coverage=1 00:11:41.525 --rc genhtml_function_coverage=1 00:11:41.525 --rc genhtml_legend=1 00:11:41.525 --rc geninfo_all_blocks=1 00:11:41.525 --rc geninfo_unexecuted_blocks=1 00:11:41.525 00:11:41.525 ' 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.525 --rc genhtml_branch_coverage=1 00:11:41.525 --rc genhtml_function_coverage=1 
00:11:41.525 --rc genhtml_legend=1 00:11:41.525 --rc geninfo_all_blocks=1 00:11:41.525 --rc geninfo_unexecuted_blocks=1 00:11:41.525 00:11:41.525 ' 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.525 --rc genhtml_branch_coverage=1 00:11:41.525 --rc genhtml_function_coverage=1 00:11:41.525 --rc genhtml_legend=1 00:11:41.525 --rc geninfo_all_blocks=1 00:11:41.525 --rc geninfo_unexecuted_blocks=1 00:11:41.525 00:11:41.525 ' 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.525 --rc genhtml_branch_coverage=1 00:11:41.525 --rc genhtml_function_coverage=1 00:11:41.525 --rc genhtml_legend=1 00:11:41.525 --rc geninfo_all_blocks=1 00:11:41.525 --rc geninfo_unexecuted_blocks=1 00:11:41.525 00:11:41.525 ' 00:11:41.525 13:45:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:41.525 13:45:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:41.525 13:45:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:41.525 13:45:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:41.525 13:45:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.526 13:45:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:41.526 ************************************ 00:11:41.526 START TEST default_locks 00:11:41.526 ************************************ 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60413 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60413 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60413 ']' 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:41.526 13:45:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:41.783 [2024-11-04 13:45:28.544361] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
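The lcov probe traced above boils down to scripts/common.sh's lt helper: split both version strings on dots and dashes, then compare the components left to right, padding the shorter list with zeros. A minimal sketch of that logic, reconstructed from the trace (the structure is illustrative; the shipped helper also routes each component through its decimal validator):

lt() {
    # succeeds when version $1 sorts strictly before version $2
    local IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"   # matches the branch taken in the trace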
00:11:41.783 [2024-11-04 13:45:28.544551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60413 ] 00:11:42.041 [2024-11-04 13:45:28.756797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.041 [2024-11-04 13:45:28.932977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.412 13:45:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.412 13:45:29 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:11:43.412 13:45:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60413 00:11:43.412 13:45:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60413 00:11:43.412 13:45:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60413 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 60413 ']' 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 60413 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60413 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:43.671 killing process with pid 60413 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60413' 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 60413 00:11:43.671 13:45:30 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 60413 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60413 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60413 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60413 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60413 ']' 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:46.195 13:45:33 
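Two small helpers carry the assertions in the trace above: locks_exist asks lslocks (from util-linux) whether the given pid holds a file lock whose path mentions spdk_cpu_lock, and killprocess only signals the pid after confirming its comm name. A sketch of the lock probe as traced:

locks_exist() {
    # succeeds when <pid> holds an advisory lock on a /var/tmp/spdk_cpu_lock_* file
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 60413 && echo "core lock held by pid 60413"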
event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:46.195 ERROR: process (pid: 60413) is no longer running 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:46.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60413) - No such process 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:46.195 00:11:46.195 real 0m4.684s 00:11:46.195 user 0m4.621s 00:11:46.195 sys 0m0.790s 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.195 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:46.195 ************************************ 00:11:46.195 END TEST default_locks 00:11:46.195 ************************************ 00:11:46.195 13:45:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:46.195 13:45:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:46.195 13:45:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.195 13:45:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:46.454 ************************************ 00:11:46.454 START TEST default_locks_via_rpc 00:11:46.454 ************************************ 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60494 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60494 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60494 ']' 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:46.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
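After the kill, the suite reruns waitforlisten under the NOT wrapper, so the test passes only if the wait fails, which is exactly what the "No such process" output above records. A minimal sketch of the inversion helper (the shipped version also vets its argument with the type -t checks seen in the trace):

NOT() {
    # invert the wrapped command's exit status
    if "$@"; then
        return 1
    fi
    return 0
}

NOT waitforlisten 60413   # passes once pid 60413 is gone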
00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:46.454 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.454 [2024-11-04 13:45:33.261988] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:11:46.454 [2024-11-04 13:45:33.262173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60494 ] 00:11:46.712 [2024-11-04 13:45:33.464803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.970 [2024-11-04 13:45:33.651507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60494 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60494 00:11:47.904 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60494 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 60494 ']' 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 60494 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60494 00:11:48.470 killing process with pid 60494 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60494' 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 60494 00:11:48.470 13:45:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 60494 00:11:51.753 00:11:51.753 real 0m4.841s 00:11:51.753 user 0m4.990s 00:11:51.753 sys 0m0.760s 00:11:51.753 13:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:51.753 ************************************ 00:11:51.753 END TEST default_locks_via_rpc 00:11:51.753 ************************************ 00:11:51.753 13:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.753 13:45:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:51.753 13:45:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:51.753 13:45:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:51.753 13:45:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:51.753 ************************************ 00:11:51.753 START TEST non_locking_app_on_locked_coremask 00:11:51.753 ************************************ 00:11:51.753 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:11:51.753 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60574 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60574 /var/tmp/spdk.sock 00:11:51.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60574 ']' 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:51.754 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.754 [2024-11-04 13:45:38.166840] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
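The default_locks_via_rpc case that just wrapped up never restarts the target: it releases the core locks with framework_disable_cpumask_locks, checks that no spdk_cpu_lock files remain, then re-claims them with framework_enable_cpumask_locks. Driven by hand, the same toggle would look roughly like this (assuming the rpc.py subcommands mirror the RPC method names traced here):

scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop /var/tmp/spdk_cpu_lock_*
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim; fails if another process grabbed a core meanwhile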
00:11:51.754 [2024-11-04 13:45:38.167020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:11:51.754 [2024-11-04 13:45:38.369001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.754 [2024-11-04 13:45:38.527888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60600 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60600 /var/tmp/spdk2.sock 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60600 ']' 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:52.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:52.688 13:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:52.956 [2024-11-04 13:45:39.724761] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:11:52.956 [2024-11-04 13:45:39.725232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60600 ] 00:11:53.234 [2024-11-04 13:45:39.944718] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
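The point of this case is that a second target may share the locked core only when it opts out of claiming: pid 60600 passes --disable-cpumask-locks plus its own RPC socket, so only pid 60574 owns /var/tmp/spdk_cpu_lock_000. Condensed, the arrangement looks like this (a sketch, flags as traced):

build/bin/spdk_tgt -m 0x1 &
first=$!                                       # claims /var/tmp/spdk_cpu_lock_000
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
second=$!                                      # same core, but no claim attempted
lslocks -p "$first" | grep spdk_cpu_lock       # match: the lock stays with the first pid
lslocks -p "$second" | grep spdk_cpu_lock      # no match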
00:11:53.234 [2024-11-04 13:45:39.944797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.491 [2024-11-04 13:45:40.229225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.018 13:45:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:56.018 13:45:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:56.018 13:45:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60574 00:11:56.018 13:45:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60574 00:11:56.018 13:45:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60574 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60574 ']' 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60574 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60574 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.951 killing process with pid 60574 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60574' 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60574 00:11:56.951 13:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60574 00:12:02.215 13:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60600 00:12:02.215 13:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60600 ']' 00:12:02.215 13:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60600 00:12:02.216 13:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:02.216 13:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.216 13:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60600 00:12:02.216 13:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.216 13:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.216 13:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60600' 00:12:02.216 killing process with pid 60600 00:12:02.216 13:45:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60600 00:12:02.216 13:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60600 00:12:05.496 00:12:05.496 real 0m13.681s 00:12:05.496 user 0m14.455s 00:12:05.496 sys 0m1.716s 00:12:05.496 13:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.496 13:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:05.496 ************************************ 00:12:05.496 END TEST non_locking_app_on_locked_coremask 00:12:05.496 ************************************ 00:12:05.496 13:45:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:05.496 13:45:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:05.496 13:45:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.496 13:45:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:05.496 ************************************ 00:12:05.496 START TEST locking_app_on_unlocked_coremask 00:12:05.496 ************************************ 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:12:05.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60766 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60766 /var/tmp/spdk.sock 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60766 ']' 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:05.496 13:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:05.497 [2024-11-04 13:45:51.900311] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:12:05.497 [2024-11-04 13:45:51.900496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60766 ] 00:12:05.497 [2024-11-04 13:45:52.092956] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:05.497 [2024-11-04 13:45:52.093030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.497 [2024-11-04 13:45:52.222465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60787 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60787 /var/tmp/spdk2.sock 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60787 ']' 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:06.433 13:45:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:06.692 [2024-11-04 13:45:53.374973] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:12:06.692 [2024-11-04 13:45:53.375401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:12:06.692 [2024-11-04 13:45:53.587001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.950 [2024-11-04 13:45:53.847971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.480 13:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:09.480 13:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:09.480 13:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60787 00:12:09.480 13:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60787 00:12:09.480 13:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60766 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60766 ']' 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60766 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60766 00:12:10.416 killing process with pid 60766 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60766' 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60766 00:12:10.416 13:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60766 00:12:15.681 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60787 00:12:15.681 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60787 ']' 00:12:15.681 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60787 00:12:15.681 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:15.681 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.681 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60787 00:12:15.978 killing process with pid 60787 00:12:15.978 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:15.978 13:46:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:15.978 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60787' 00:12:15.978 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60787 00:12:15.978 13:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60787 00:12:18.551 00:12:18.551 real 0m13.416s 00:12:18.551 user 0m14.167s 00:12:18.551 sys 0m1.671s 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 ************************************ 00:12:18.551 END TEST locking_app_on_unlocked_coremask 00:12:18.551 ************************************ 00:12:18.551 13:46:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:18.551 13:46:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:18.551 13:46:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.551 13:46:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 ************************************ 00:12:18.551 START TEST locking_app_on_locked_coremask 00:12:18.551 ************************************ 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:12:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60952 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60952 /var/tmp/spdk.sock 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60952 ']' 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.551 13:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 [2024-11-04 13:46:05.342263] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:12:18.551 [2024-11-04 13:46:05.342413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60952 ] 00:12:18.809 [2024-11-04 13:46:05.530925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.809 [2024-11-04 13:46:05.692706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60968 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60968 /var/tmp/spdk2.sock 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60968 /var/tmp/spdk2.sock 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60968 /var/tmp/spdk2.sock 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60968 ']' 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:19.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:19.743 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.744 13:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:20.002 [2024-11-04 13:46:06.752404] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
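Here the expectation flips: the second instance (pid 60968) keeps lock claiming enabled, so its launch on the already-claimed core must abort, and the NOT wrapper around waitforlisten encodes exactly that. In shell terms, roughly (a sketch):

build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # passes: the failed claim on core 0 kills the launch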
00:12:20.002 [2024-11-04 13:46:06.752883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60968 ] 00:12:20.328 [2024-11-04 13:46:06.955075] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60952 has claimed it. 00:12:20.328 [2024-11-04 13:46:06.955172] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:20.591 ERROR: process (pid: 60968) is no longer running 00:12:20.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60968) - No such process 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60952 00:12:20.591 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60952 00:12:20.592 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60952 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60952 ']' 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60952 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60952 00:12:21.159 killing process with pid 60952 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60952' 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60952 00:12:21.159 13:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60952 00:12:24.449 00:12:24.449 real 0m5.553s 00:12:24.449 user 0m5.916s 00:12:24.449 sys 0m0.997s 00:12:24.449 13:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.449 ************************************ 00:12:24.449 END 
TEST locking_app_on_locked_coremask 00:12:24.449 13:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:24.449 ************************************ 00:12:24.449 13:46:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:24.449 13:46:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:24.449 13:46:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.449 13:46:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:24.449 ************************************ 00:12:24.449 START TEST locking_overlapped_coremask 00:12:24.449 ************************************ 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61043 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61043 /var/tmp/spdk.sock 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61043 ']' 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:24.449 13:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:24.449 [2024-11-04 13:46:10.996245] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:12:24.450 [2024-11-04 13:46:10.997974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:12:24.450 [2024-11-04 13:46:11.208234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.707 [2024-11-04 13:46:11.392631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.707 [2024-11-04 13:46:11.392731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.707 [2024-11-04 13:46:11.392751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.642 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:25.642 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:12:25.642 13:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61072 00:12:25.642 13:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:25.642 13:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61072 /var/tmp/spdk2.sock 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61072 /var/tmp/spdk2.sock 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61072 /var/tmp/spdk2.sock 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61072 ']' 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:25.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:25.643 13:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:25.901 [2024-11-04 13:46:12.673706] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
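The contested core is easiest to see with the two masks written out in binary; as a worked example:

0x07 = 0b00111 -> cores 0,1,2   (first target, three lock files)
0x1c = 0b11100 -> cores 2,3,4   (second target)
0x07 & 0x1c = 0x04 -> core 2 is requested by both, so the second launch must fail its claim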
00:12:25.901 [2024-11-04 13:46:12.674191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61072 ] 00:12:26.159 [2024-11-04 13:46:12.896234] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61043 has claimed it. 00:12:26.159 [2024-11-04 13:46:12.896332] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:26.725 ERROR: process (pid: 61072) is no longer running 00:12:26.725 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61072) - No such process 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61043 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 61043 ']' 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 61043 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61043 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61043' 00:12:26.725 killing process with pid 61043 00:12:26.725 13:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 61043 00:12:26.725 13:46:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 61043 00:12:30.032 ************************************ 00:12:30.032 END TEST locking_overlapped_coremask 00:12:30.032 ************************************ 00:12:30.032 00:12:30.032 real 0m5.387s 00:12:30.032 user 0m14.685s 00:12:30.032 sys 0m0.775s 00:12:30.032 13:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:30.032 13:46:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:30.032 13:46:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:30.032 13:46:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:30.032 13:46:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:30.032 13:46:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:30.032 ************************************ 00:12:30.032 START TEST locking_overlapped_coremask_via_rpc 00:12:30.032 ************************************ 00:12:30.032 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:12:30.032 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61142 00:12:30.032 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61142 /var/tmp/spdk.sock 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61142 ']' 00:12:30.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:30.033 13:46:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.033 [2024-11-04 13:46:16.443766] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:12:30.033 [2024-11-04 13:46:16.443945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61142 ] 00:12:30.033 [2024-11-04 13:46:16.650137] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
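Before tearing down, the previous case also verified that exactly the expected lock files survive for mask 0x7: the check traced above compares a glob of /var/tmp against the brace expansion 000..002. As a sketch:

check_remaining_locks() {
    # for -m 0x7, exactly spdk_cpu_lock_000..002 should exist
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}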
00:12:30.033 [2024-11-04 13:46:16.650232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:30.033 [2024-11-04 13:46:16.836039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.033 [2024-11-04 13:46:16.836127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.033 [2024-11-04 13:46:16.836134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61165 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61165 /var/tmp/spdk2.sock 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61165 ']' 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:30.966 13:46:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.224 [2024-11-04 13:46:17.995434] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:12:31.224 [2024-11-04 13:46:17.995910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:12:31.513 [2024-11-04 13:46:18.211004] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
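Both targets in this last case boot with --disable-cpumask-locks, which is why two overlapping masks come up cleanly side by side; the conflict is deferred until one of them re-enables locking over RPC. The setup, condensed (a sketch of the traced launches):

build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # cores 0-2, nothing claimed
build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, nothing claimed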
00:12:31.513 [2024-11-04 13:46:18.211119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.771 [2024-11-04 13:46:18.484062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.771 [2024-11-04 13:46:18.487715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.771 [2024-11-04 13:46:18.487737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.300 [2024-11-04 13:46:20.870814] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61142 has claimed it. 
00:12:34.300 request: 00:12:34.300 { 00:12:34.300 "method": "framework_enable_cpumask_locks", 00:12:34.300 "req_id": 1 00:12:34.300 } 00:12:34.300 Got JSON-RPC error response 00:12:34.300 response: 00:12:34.300 { 00:12:34.300 "code": -32603, 00:12:34.300 "message": "Failed to claim CPU core: 2" 00:12:34.300 } 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61142 /var/tmp/spdk.sock 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61142 ']' 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.300 13:46:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61165 /var/tmp/spdk2.sock 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61165 ']' 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:34.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
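(A minimal by-hand sketch of the failure captured above, assuming the binaries, sockets, and masks from the logged invocations: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so both targets contend for core 2, and the second framework_enable_cpumask_locks call must fail.)
    # sketch only -- paths and sockets as in the logged spdk_tgt invocations
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py framework_enable_cpumask_locks
    # first target now holds /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error -32603 "Failed to claim CPU core: 2", as in the JSON-RPC response above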
00:12:34.300 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.301 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.866 ************************************ 00:12:34.866 END TEST locking_overlapped_coremask_via_rpc 00:12:34.866 ************************************ 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:34.866 00:12:34.866 real 0m5.202s 00:12:34.866 user 0m1.932s 00:12:34.866 sys 0m0.295s 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.866 13:46:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.866 13:46:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:34.866 13:46:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61142 ]] 00:12:34.866 13:46:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61142 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61142 ']' 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61142 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61142 00:12:34.866 killing process with pid 61142 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61142' 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61142 00:12:34.866 13:46:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61142 00:12:38.183 13:46:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61165 ]] 00:12:38.183 13:46:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61165 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61165 ']' 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61165 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.183 
13:46:24 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61165 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:12:38.183 killing process with pid 61165 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61165' 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61165 00:12:38.183 13:46:24 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61165 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:40.729 Process with pid 61142 is not found 00:12:40.729 Process with pid 61165 is not found 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61142 ]] 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61142 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61142 ']' 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61142 00:12:40.729 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61142) - No such process 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61142 is not found' 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61165 ]] 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61165 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61165 ']' 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61165 00:12:40.729 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61165) - No such process 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61165 is not found' 00:12:40.729 13:46:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:40.729 ************************************ 00:12:40.729 END TEST cpu_locks 00:12:40.729 ************************************ 00:12:40.729 00:12:40.729 real 0m59.039s 00:12:40.729 user 1m42.520s 00:12:40.729 sys 0m8.299s 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.729 13:46:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:40.729 ************************************ 00:12:40.729 END TEST event 00:12:40.729 ************************************ 00:12:40.729 00:12:40.729 real 1m31.983s 00:12:40.729 user 2m46.750s 00:12:40.729 sys 0m12.881s 00:12:40.729 13:46:27 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.729 13:46:27 event -- common/autotest_common.sh@10 -- # set +x 00:12:40.729 13:46:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:40.729 13:46:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:40.729 13:46:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.729 13:46:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.729 ************************************ 00:12:40.729 START TEST thread 00:12:40.729 ************************************ 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:40.729 * Looking for test storage... 
00:12:40.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:40.729 13:46:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.729 13:46:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.729 13:46:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.729 13:46:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.729 13:46:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.729 13:46:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.729 13:46:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.729 13:46:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.729 13:46:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.729 13:46:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.729 13:46:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.729 13:46:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:40.729 13:46:27 thread -- scripts/common.sh@345 -- # : 1 00:12:40.729 13:46:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.729 13:46:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.729 13:46:27 thread -- scripts/common.sh@365 -- # decimal 1 00:12:40.729 13:46:27 thread -- scripts/common.sh@353 -- # local d=1 00:12:40.729 13:46:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.729 13:46:27 thread -- scripts/common.sh@355 -- # echo 1 00:12:40.729 13:46:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.729 13:46:27 thread -- scripts/common.sh@366 -- # decimal 2 00:12:40.729 13:46:27 thread -- scripts/common.sh@353 -- # local d=2 00:12:40.729 13:46:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.729 13:46:27 thread -- scripts/common.sh@355 -- # echo 2 00:12:40.729 13:46:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.729 13:46:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.729 13:46:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.729 13:46:27 thread -- scripts/common.sh@368 -- # return 0 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:40.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.729 --rc genhtml_branch_coverage=1 00:12:40.729 --rc genhtml_function_coverage=1 00:12:40.729 --rc genhtml_legend=1 00:12:40.729 --rc geninfo_all_blocks=1 00:12:40.729 --rc geninfo_unexecuted_blocks=1 00:12:40.729 00:12:40.729 ' 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:40.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.729 --rc genhtml_branch_coverage=1 00:12:40.729 --rc genhtml_function_coverage=1 00:12:40.729 --rc genhtml_legend=1 00:12:40.729 --rc geninfo_all_blocks=1 00:12:40.729 --rc geninfo_unexecuted_blocks=1 00:12:40.729 00:12:40.729 ' 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:40.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:40.729 --rc genhtml_branch_coverage=1 00:12:40.729 --rc genhtml_function_coverage=1 00:12:40.729 --rc genhtml_legend=1 00:12:40.729 --rc geninfo_all_blocks=1 00:12:40.729 --rc geninfo_unexecuted_blocks=1 00:12:40.729 00:12:40.729 ' 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:40.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.729 --rc genhtml_branch_coverage=1 00:12:40.729 --rc genhtml_function_coverage=1 00:12:40.729 --rc genhtml_legend=1 00:12:40.729 --rc geninfo_all_blocks=1 00:12:40.729 --rc geninfo_unexecuted_blocks=1 00:12:40.729 00:12:40.729 ' 00:12:40.729 13:46:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.729 13:46:27 thread -- common/autotest_common.sh@10 -- # set +x 00:12:40.729 ************************************ 00:12:40.730 START TEST thread_poller_perf 00:12:40.730 ************************************ 00:12:40.730 13:46:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:40.730 [2024-11-04 13:46:27.570779] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:12:40.730 [2024-11-04 13:46:27.571273] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61377 ] 00:12:40.987 [2024-11-04 13:46:27.775537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.244 [2024-11-04 13:46:27.969799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.244 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:12:42.661 [2024-11-04T13:46:29.583Z] ====================================== 00:12:42.661 [2024-11-04T13:46:29.583Z] busy:2117798936 (cyc) 00:12:42.661 [2024-11-04T13:46:29.583Z] total_run_count: 291000 00:12:42.661 [2024-11-04T13:46:29.583Z] tsc_hz: 2100000000 (cyc) 00:12:42.661 [2024-11-04T13:46:29.583Z] ====================================== 00:12:42.661 [2024-11-04T13:46:29.583Z] poller_cost: 7277 (cyc), 3465 (nsec) 00:12:42.661 00:12:42.661 real 0m1.745s 00:12:42.661 user 0m1.495s 00:12:42.661 ************************************ 00:12:42.661 END TEST thread_poller_perf 00:12:42.661 ************************************ 00:12:42.661 sys 0m0.134s 00:12:42.661 13:46:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.661 13:46:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:42.661 13:46:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:42.661 13:46:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:12:42.661 13:46:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.661 13:46:29 thread -- common/autotest_common.sh@10 -- # set +x 00:12:42.661 ************************************ 00:12:42.661 START TEST thread_poller_perf 00:12:42.661 ************************************ 00:12:42.661 13:46:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:42.661 [2024-11-04 13:46:29.361641] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:12:42.661 [2024-11-04 13:46:29.362108] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61412 ] 00:12:42.661 [2024-11-04 13:46:29.566239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.923 [2024-11-04 13:46:29.753504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.923 Running 1000 pollers for 1 seconds with 0 microseconds period. 
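(To read the poller_perf tables above and below: poller_cost is busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A sketch recomputing the 1-microsecond run above; the 0-microsecond run below works out the same way, to 504 cyc and 240 nsec.)
    # sketch: recompute poller_cost from the counters in the table above
    busy=2117798936 runs=291000 tsc_hz=2100000000
    echo $(( busy / runs ))                        # 7277 cyc per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 3465 nsec at 2.1 GHz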
00:12:44.324 [2024-11-04T13:46:31.246Z] ====================================== 00:12:44.324 [2024-11-04T13:46:31.246Z] busy:2105692510 (cyc) 00:12:44.324 [2024-11-04T13:46:31.246Z] total_run_count: 4171000 00:12:44.324 [2024-11-04T13:46:31.246Z] tsc_hz: 2100000000 (cyc) 00:12:44.324 [2024-11-04T13:46:31.246Z] ====================================== 00:12:44.324 [2024-11-04T13:46:31.246Z] poller_cost: 504 (cyc), 240 (nsec) 00:12:44.324 00:12:44.324 real 0m1.727s 00:12:44.324 user 0m1.489s 00:12:44.324 sys 0m0.127s 00:12:44.324 13:46:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.324 13:46:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 ************************************ 00:12:44.324 END TEST thread_poller_perf 00:12:44.324 ************************************ 00:12:44.324 13:46:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:44.324 ************************************ 00:12:44.324 END TEST thread 00:12:44.324 ************************************ 00:12:44.324 00:12:44.324 real 0m3.788s 00:12:44.324 user 0m3.146s 00:12:44.324 sys 0m0.414s 00:12:44.324 13:46:31 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.324 13:46:31 thread -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 13:46:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:44.324 13:46:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:44.324 13:46:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:44.324 13:46:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:44.324 13:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:44.324 ************************************ 00:12:44.324 START TEST app_cmdline 00:12:44.324 ************************************ 00:12:44.324 13:46:31 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:44.584 * Looking for test storage... 
00:12:44.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.584 13:46:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:44.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.584 --rc genhtml_branch_coverage=1 00:12:44.584 --rc genhtml_function_coverage=1 00:12:44.584 --rc genhtml_legend=1 00:12:44.584 --rc geninfo_all_blocks=1 00:12:44.584 --rc geninfo_unexecuted_blocks=1 00:12:44.584 00:12:44.584 ' 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:44.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.584 --rc genhtml_branch_coverage=1 00:12:44.584 --rc genhtml_function_coverage=1 00:12:44.584 --rc genhtml_legend=1 00:12:44.584 --rc geninfo_all_blocks=1 00:12:44.584 --rc geninfo_unexecuted_blocks=1 00:12:44.584 
00:12:44.584 ' 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:44.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.584 --rc genhtml_branch_coverage=1 00:12:44.584 --rc genhtml_function_coverage=1 00:12:44.584 --rc genhtml_legend=1 00:12:44.584 --rc geninfo_all_blocks=1 00:12:44.584 --rc geninfo_unexecuted_blocks=1 00:12:44.584 00:12:44.584 ' 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:44.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.584 --rc genhtml_branch_coverage=1 00:12:44.584 --rc genhtml_function_coverage=1 00:12:44.584 --rc genhtml_legend=1 00:12:44.584 --rc geninfo_all_blocks=1 00:12:44.584 --rc geninfo_unexecuted_blocks=1 00:12:44.584 00:12:44.584 ' 00:12:44.584 13:46:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:44.584 13:46:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61497 00:12:44.584 13:46:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61497 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 61497 ']' 00:12:44.584 13:46:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:44.584 13:46:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:44.843 [2024-11-04 13:46:31.531537] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:12:44.843 [2024-11-04 13:46:31.532518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:12:44.843 [2024-11-04 13:46:31.741915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.101 [2024-11-04 13:46:31.912533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.479 13:46:33 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:46.479 13:46:33 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:12:46.479 13:46:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:46.754 { 00:12:46.754 "version": "SPDK v25.01-pre git sha1 1ca833860", 00:12:46.754 "fields": { 00:12:46.754 "major": 25, 00:12:46.754 "minor": 1, 00:12:46.754 "patch": 0, 00:12:46.754 "suffix": "-pre", 00:12:46.754 "commit": "1ca833860" 00:12:46.754 } 00:12:46.754 } 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:46.754 13:46:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:46.754 13:46:33 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:47.013 request: 00:12:47.013 { 00:12:47.013 "method": "env_dpdk_get_mem_stats", 00:12:47.013 "req_id": 1 00:12:47.013 } 00:12:47.013 Got JSON-RPC error response 00:12:47.013 response: 00:12:47.013 { 00:12:47.013 "code": -32601, 00:12:47.013 "message": "Method not found" 00:12:47.013 } 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.271 13:46:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61497 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 61497 ']' 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 61497 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61497 00:12:47.271 killing process with pid 61497 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61497' 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@971 -- # kill 61497 00:12:47.271 13:46:33 app_cmdline -- common/autotest_common.sh@976 -- # wait 61497 00:12:49.800 ************************************ 00:12:49.800 END TEST app_cmdline 00:12:49.800 00:12:49.800 real 0m5.530s 00:12:49.800 user 0m6.217s 00:12:49.800 sys 0m0.721s 00:12:49.800 13:46:36 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.801 13:46:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:49.801 ************************************ 00:12:50.059 13:46:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:50.059 13:46:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:50.059 13:46:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.059 13:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:50.059 ************************************ 00:12:50.059 START TEST version 00:12:50.059 ************************************ 00:12:50.059 13:46:36 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:50.059 * Looking for test storage... 
00:12:50.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:50.059 13:46:36 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:50.059 13:46:36 version -- common/autotest_common.sh@1691 -- # lcov --version 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:50.060 13:46:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.060 13:46:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.060 13:46:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.060 13:46:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.060 13:46:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.060 13:46:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.060 13:46:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.060 13:46:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.060 13:46:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.060 13:46:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.060 13:46:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.060 13:46:36 version -- scripts/common.sh@344 -- # case "$op" in 00:12:50.060 13:46:36 version -- scripts/common.sh@345 -- # : 1 00:12:50.060 13:46:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.060 13:46:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.060 13:46:36 version -- scripts/common.sh@365 -- # decimal 1 00:12:50.060 13:46:36 version -- scripts/common.sh@353 -- # local d=1 00:12:50.060 13:46:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.060 13:46:36 version -- scripts/common.sh@355 -- # echo 1 00:12:50.060 13:46:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.060 13:46:36 version -- scripts/common.sh@366 -- # decimal 2 00:12:50.060 13:46:36 version -- scripts/common.sh@353 -- # local d=2 00:12:50.060 13:46:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.060 13:46:36 version -- scripts/common.sh@355 -- # echo 2 00:12:50.060 13:46:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.060 13:46:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.060 13:46:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.060 13:46:36 version -- scripts/common.sh@368 -- # return 0 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.060 --rc genhtml_branch_coverage=1 00:12:50.060 --rc genhtml_function_coverage=1 00:12:50.060 --rc genhtml_legend=1 00:12:50.060 --rc geninfo_all_blocks=1 00:12:50.060 --rc geninfo_unexecuted_blocks=1 00:12:50.060 00:12:50.060 ' 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.060 --rc genhtml_branch_coverage=1 00:12:50.060 --rc genhtml_function_coverage=1 00:12:50.060 --rc genhtml_legend=1 00:12:50.060 --rc geninfo_all_blocks=1 00:12:50.060 --rc geninfo_unexecuted_blocks=1 00:12:50.060 00:12:50.060 ' 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:50.060 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:50.060 --rc genhtml_branch_coverage=1 00:12:50.060 --rc genhtml_function_coverage=1 00:12:50.060 --rc genhtml_legend=1 00:12:50.060 --rc geninfo_all_blocks=1 00:12:50.060 --rc geninfo_unexecuted_blocks=1 00:12:50.060 00:12:50.060 ' 00:12:50.060 13:46:36 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.060 --rc genhtml_branch_coverage=1 00:12:50.060 --rc genhtml_function_coverage=1 00:12:50.060 --rc genhtml_legend=1 00:12:50.060 --rc geninfo_all_blocks=1 00:12:50.060 --rc geninfo_unexecuted_blocks=1 00:12:50.060 00:12:50.060 ' 00:12:50.060 13:46:36 version -- app/version.sh@17 -- # get_header_version major 00:12:50.060 13:46:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # cut -f2 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # tr -d '"' 00:12:50.060 13:46:36 version -- app/version.sh@17 -- # major=25 00:12:50.060 13:46:36 version -- app/version.sh@18 -- # get_header_version minor 00:12:50.060 13:46:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # cut -f2 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # tr -d '"' 00:12:50.060 13:46:36 version -- app/version.sh@18 -- # minor=1 00:12:50.060 13:46:36 version -- app/version.sh@19 -- # get_header_version patch 00:12:50.060 13:46:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # tr -d '"' 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # cut -f2 00:12:50.060 13:46:36 version -- app/version.sh@19 -- # patch=0 00:12:50.060 13:46:36 version -- app/version.sh@20 -- # get_header_version suffix 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # cut -f2 00:12:50.060 13:46:36 version -- app/version.sh@14 -- # tr -d '"' 00:12:50.060 13:46:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:50.060 13:46:36 version -- app/version.sh@20 -- # suffix=-pre 00:12:50.060 13:46:36 version -- app/version.sh@22 -- # version=25.1 00:12:50.060 13:46:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:50.060 13:46:36 version -- app/version.sh@28 -- # version=25.1rc0 00:12:50.060 13:46:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:50.060 13:46:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:50.319 13:46:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:50.319 13:46:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:50.319 00:12:50.319 real 0m0.280s 00:12:50.319 user 0m0.170s 00:12:50.319 sys 0m0.153s 00:12:50.319 ************************************ 00:12:50.319 END TEST version 00:12:50.319 ************************************ 00:12:50.319 13:46:37 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.319 13:46:37 version -- common/autotest_common.sh@10 -- # set +x 00:12:50.319 13:46:37 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:50.319 13:46:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:50.319 13:46:37 -- spdk/autotest.sh@194 -- # uname -s 00:12:50.319 13:46:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:50.319 13:46:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:50.319 13:46:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:50.319 13:46:37 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:12:50.319 13:46:37 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:50.319 13:46:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:50.319 13:46:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.319 13:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:50.319 ************************************ 00:12:50.319 START TEST blockdev_nvme 00:12:50.319 ************************************ 00:12:50.319 13:46:37 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:50.319 * Looking for test storage... 00:12:50.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:50.319 13:46:37 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:50.319 13:46:37 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:50.319 13:46:37 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:50.578 13:46:37 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.578 13:46:37 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:12:50.578 13:46:37 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.578 13:46:37 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:50.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.578 --rc genhtml_branch_coverage=1 00:12:50.578 --rc genhtml_function_coverage=1 00:12:50.578 --rc genhtml_legend=1 00:12:50.578 --rc geninfo_all_blocks=1 00:12:50.578 --rc geninfo_unexecuted_blocks=1 00:12:50.578 00:12:50.578 ' 00:12:50.578 13:46:37 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:50.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.578 --rc genhtml_branch_coverage=1 00:12:50.578 --rc genhtml_function_coverage=1 00:12:50.578 --rc genhtml_legend=1 00:12:50.578 --rc geninfo_all_blocks=1 00:12:50.578 --rc geninfo_unexecuted_blocks=1 00:12:50.578 00:12:50.578 ' 00:12:50.578 13:46:37 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:50.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.578 --rc genhtml_branch_coverage=1 00:12:50.578 --rc genhtml_function_coverage=1 00:12:50.578 --rc genhtml_legend=1 00:12:50.578 --rc geninfo_all_blocks=1 00:12:50.578 --rc geninfo_unexecuted_blocks=1 00:12:50.578 00:12:50.578 ' 00:12:50.578 13:46:37 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:50.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.578 --rc genhtml_branch_coverage=1 00:12:50.578 --rc genhtml_function_coverage=1 00:12:50.578 --rc genhtml_legend=1 00:12:50.578 --rc geninfo_all_blocks=1 00:12:50.578 --rc geninfo_unexecuted_blocks=1 00:12:50.578 00:12:50.578 ' 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:50.578 13:46:37 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:12:50.578 13:46:37 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:50.579 13:46:37 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61702 00:12:50.579 13:46:37 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:50.579 13:46:37 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:50.579 13:46:37 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61702 00:12:50.579 13:46:37 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 61702 ']' 00:12:50.579 13:46:37 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.579 13:46:37 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:50.579 13:46:37 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.579 13:46:37 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:50.579 13:46:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.579 [2024-11-04 13:46:37.470181] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:12:50.579 [2024-11-04 13:46:37.470962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61702 ] 00:12:50.836 [2024-11-04 13:46:37.678011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.095 [2024-11-04 13:46:37.853653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.030 13:46:38 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.030 13:46:38 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:12:52.030 13:46:38 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:52.030 13:46:38 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:12:52.030 13:46:38 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:12:52.030 13:46:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:52.030 13:46:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:52.288 13:46:39 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:52.288 13:46:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.288 13:46:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.546 13:46:39 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:52.546 13:46:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.546 13:46:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.804 13:46:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.804 13:46:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:52.804 13:46:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:52.805 13:46:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f1281400-9124-4dc0-9bc1-952753bee380"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f1281400-9124-4dc0-9bc1-952753bee380",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "aa060ab5-9b03-43e9-8121-3625cdbf595d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "aa060ab5-9b03-43e9-8121-3625cdbf595d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ab19ab91-ede8-4784-84c2-8c6e2f43e5c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ab19ab91-ede8-4784-84c2-8c6e2f43e5c9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "56a8b723-e742-4819-a493-92a28b659e5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "56a8b723-e742-4819-a493-92a28b659e5f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7c8ae891-3c22-4f36-a360-db2928160f36"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "7c8ae891-3c22-4f36-a360-db2928160f36",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8c8818b4-30d3-4265-aaf8-da58ba4e38a5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8c8818b4-30d3-4265-aaf8-da58ba4e38a5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:52.805 13:46:39 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:52.805 13:46:39 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:12:52.805 13:46:39 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:52.805 13:46:39 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61702 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 61702 ']' 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 61702 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:12:52.805 13:46:39 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61702 00:12:52.805 killing process with pid 61702 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61702' 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 61702 00:12:52.805 13:46:39 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 61702 00:12:55.366 13:46:42 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:55.366 13:46:42 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:55.366 13:46:42 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:55.366 13:46:42 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.366 13:46:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.366 ************************************ 00:12:55.366 START TEST bdev_hello_world 00:12:55.366 ************************************ 00:12:55.366 13:46:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:55.624 [2024-11-04 13:46:42.358490] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:12:55.624 [2024-11-04 13:46:42.359009] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:12:55.882 [2024-11-04 13:46:42.554805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.882 [2024-11-04 13:46:42.681192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.448 [2024-11-04 13:46:43.351127] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:56.448 [2024-11-04 13:46:43.351189] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:56.448 [2024-11-04 13:46:43.351233] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:56.448 [2024-11-04 13:46:43.354612] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:56.448 [2024-11-04 13:46:43.355048] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:56.448 [2024-11-04 13:46:43.355076] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:56.448 [2024-11-04 13:46:43.355263] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
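The long JSON block earlier in this run is the bdev_get_bdevs RPC output the harness echoes back while building its bdev list; the xtrace at bdev/blockdev.sh@747-751 shows the pattern. A minimal sketch of it, reconstructed from the trace rather than from the blockdev.sh source (rpc_cmd is the harness's RPC wrapper):

  # keep the pretty-printed JSON of every unclaimed bdev, one array element per output line
  mapfile -t bdevs < <(rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
  # re-emit those lines and keep only the "name" fields (Nvme0n1, Nvme1n1, ...)
  mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
  bdev_list=("${bdevs_name[@]}")
  hello_world_bdev=${bdev_list[0]}   # assumption: the first listed bdev; the trace only shows Nvme0n1 being chosen

Because jq pretty-prints, each bdevs element holds a single line of JSON, which is why the harness later echoes the dump as a printf '%s\n' over many individually quoted fragments.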
00:12:56.448 00:12:56.448 [2024-11-04 13:46:43.355293] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:57.844 ************************************ 00:12:57.844 END TEST bdev_hello_world 00:12:57.844 ************************************ 00:12:57.844 00:12:57.844 real 0m2.336s 00:12:57.844 user 0m1.917s 00:12:57.844 sys 0m0.308s 00:12:57.844 13:46:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:57.844 13:46:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:57.844 13:46:44 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:57.844 13:46:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:57.844 13:46:44 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:57.844 13:46:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.844 ************************************ 00:12:57.844 START TEST bdev_bounds 00:12:57.844 ************************************ 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:12:57.844 Process bdevio pid: 61850 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61850 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61850' 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61850 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61850 ']' 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.844 13:46:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:57.844 [2024-11-04 13:46:44.751901] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
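The bdevio app starting here will be torn down the same way the previous one was: through killprocess (common/autotest_common.sh@952-@976 in the trace above), which runs a fixed guard chain before signalling anything. A condensed sketch of that chain, inferred from the xtrace; the real helper has more branches (for example sudo-owned daemons) than this run exercises:

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                   # @952: refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0      # @956: process already gone, nothing to do
      if [ "$(uname)" = Linux ]; then             # @957
          process_name=$(ps --no-headers -o comm= "$pid")   # @958: reactor_0 in this run
      fi
      # @962 compares $process_name against sudo; that path is not taken here
      echo "killing process with pid $pid"        # @970
      kill "$pid"                                 # @971
      wait "$pid"                                 # @976: reap so the next sub-test can reuse /var/tmp/spdk.sock cleanly
  }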
00:12:57.844 [2024-11-04 13:46:44.752094] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61850 ] 00:12:58.122 [2024-11-04 13:46:44.946728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.380 [2024-11-04 13:46:45.084351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.380 [2024-11-04 13:46:45.084471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.380 [2024-11-04 13:46:45.084501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.947 13:46:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.947 13:46:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:12:58.947 13:46:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:59.205 I/O targets: 00:12:59.205 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:59.205 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:59.205 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.205 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.205 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.205 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:59.205 00:12:59.205 00:12:59.205 CUnit - A unit testing framework for C - Version 2.1-3 00:12:59.205 http://cunit.sourceforge.net/ 00:12:59.205 00:12:59.205 00:12:59.205 Suite: bdevio tests on: Nvme3n1 00:12:59.205 Test: blockdev write read block ...passed 00:12:59.205 Test: blockdev write zeroes read block ...passed 00:12:59.205 Test: blockdev write zeroes read no split ...passed 00:12:59.205 Test: blockdev write zeroes read split ...passed 00:12:59.205 Test: blockdev write zeroes read split partial ...passed 00:12:59.205 Test: blockdev reset ...[2024-11-04 13:46:46.010494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:59.205 [2024-11-04 13:46:46.015189] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:12:59.205 passed 00:12:59.205 Test: blockdev write read 8 blocks ...
00:12:59.205 passed 00:12:59.205 Test: blockdev write read size > 128k ...passed 00:12:59.205 Test: blockdev write read invalid size ...passed 00:12:59.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.205 Test: blockdev write read max offset ...passed 00:12:59.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.205 Test: blockdev writev readv 8 blocks ...passed 00:12:59.205 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.205 Test: blockdev writev readv block ...passed 00:12:59.205 Test: blockdev writev readv size > 128k ...passed 00:12:59.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.205 Test: blockdev comparev and writev ...[2024-11-04 13:46:46.025401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae20a000 len:0x1000 00:12:59.205 [2024-11-04 13:46:46.025459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.205 passed 00:12:59.205 Test: blockdev nvme passthru rw ...passed 00:12:59.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.205 Test: blockdev nvme admin passthru ...[2024-11-04 13:46:46.026217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.205 [2024-11-04 13:46:46.026267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.205 passed 00:12:59.205 Test: blockdev copy ...passed 00:12:59.205 Suite: bdevio tests on: Nvme2n3 00:12:59.205 Test: blockdev write read block ...passed 00:12:59.205 Test: blockdev write zeroes read block ...passed 00:12:59.205 Test: blockdev write zeroes read no split ...passed 00:12:59.205 Test: blockdev write zeroes read split ...passed 00:12:59.205 Test: blockdev write zeroes read split partial ...passed 00:12:59.205 Test: blockdev reset ...[2024-11-04 13:46:46.109371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:59.205 [2024-11-04 13:46:46.114043] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:59.205 passed 00:12:59.205 Test: blockdev write read 8 blocks ...passed 00:12:59.205 Test: blockdev write read size > 128k ...passed 00:12:59.205 Test: blockdev write read invalid size ...passed 00:12:59.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.205 Test: blockdev write read max offset ...passed 00:12:59.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.206 Test: blockdev writev readv 8 blocks ...passed 00:12:59.206 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.206 Test: blockdev writev readv block ...passed 00:12:59.206 Test: blockdev writev readv size > 128k ...passed 00:12:59.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.206 Test: blockdev comparev and writev ...[2024-11-04 13:46:46.123925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x291406000 len:0x1000 00:12:59.206 [2024-11-04 13:46:46.123985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.206 passed 00:12:59.206 Test: blockdev nvme passthru rw ...passed 00:12:59.206 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:46:46.124802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.206 [2024-11-04 13:46:46.124844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.206 passed 00:12:59.464 Test: blockdev nvme admin passthru ...passed 00:12:59.464 Test: blockdev copy ...passed 00:12:59.464 Suite: bdevio tests on: Nvme2n2 00:12:59.464 Test: blockdev write read block ...passed 00:12:59.464 Test: blockdev write zeroes read block ...passed 00:12:59.464 Test: blockdev write zeroes read no split ...passed 00:12:59.464 Test: blockdev write zeroes read split ...passed 00:12:59.464 Test: blockdev write zeroes read split partial ...passed 00:12:59.464 Test: blockdev reset ...[2024-11-04 13:46:46.205444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:59.464 [2024-11-04 13:46:46.210326] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:12:59.464 passed 00:12:59.464 Test: blockdev write read 8 blocks ...
00:12:59.464 passed 00:12:59.464 Test: blockdev write read size > 128k ...passed 00:12:59.464 Test: blockdev write read invalid size ...passed 00:12:59.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.464 Test: blockdev write read max offset ...passed 00:12:59.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.464 Test: blockdev writev readv 8 blocks ...passed 00:12:59.464 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.464 Test: blockdev writev readv block ...passed 00:12:59.464 Test: blockdev writev readv size > 128k ...passed 00:12:59.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.464 Test: blockdev comparev and writev ...[2024-11-04 13:46:46.219437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a3c000 len:0x1000 00:12:59.464 [2024-11-04 13:46:46.219508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.464 passed 00:12:59.464 Test: blockdev nvme passthru rw ...passed 00:12:59.464 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.464 Test: blockdev nvme admin passthru ...[2024-11-04 13:46:46.220477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.464 [2024-11-04 13:46:46.220524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.464 passed 00:12:59.464 Test: blockdev copy ...passed 00:12:59.464 Suite: bdevio tests on: Nvme2n1 00:12:59.464 Test: blockdev write read block ...passed 00:12:59.464 Test: blockdev write zeroes read block ...passed 00:12:59.464 Test: blockdev write zeroes read no split ...passed 00:12:59.464 Test: blockdev write zeroes read split ...passed 00:12:59.464 Test: blockdev write zeroes read split partial ...passed 00:12:59.464 Test: blockdev reset ...[2024-11-04 13:46:46.315892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:59.464 [2024-11-04 13:46:46.320771] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:12:59.464 passed 00:12:59.464 Test: blockdev write read 8 blocks ...
00:12:59.464 passed 00:12:59.464 Test: blockdev write read size > 128k ...passed 00:12:59.464 Test: blockdev write read invalid size ...passed 00:12:59.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.464 Test: blockdev write read max offset ...passed 00:12:59.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.464 Test: blockdev writev readv 8 blocks ...passed 00:12:59.464 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.464 Test: blockdev writev readv block ...passed 00:12:59.464 Test: blockdev writev readv size > 128k ...passed 00:12:59.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.464 Test: blockdev comparev and writev ...[2024-11-04 13:46:46.330890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a38000 len:0x1000 00:12:59.464 [2024-11-04 13:46:46.331082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.464 passed 00:12:59.464 Test: blockdev nvme passthru rw ...passed 00:12:59.464 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:46:46.332087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.464 [2024-11-04 13:46:46.332126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.464 passed 00:12:59.464 Test: blockdev nvme admin passthru ...passed 00:12:59.464 Test: blockdev copy ...passed 00:12:59.464 Suite: bdevio tests on: Nvme1n1 00:12:59.464 Test: blockdev write read block ...passed 00:12:59.464 Test: blockdev write zeroes read block ...passed 00:12:59.464 Test: blockdev write zeroes read no split ...passed 00:12:59.722 Test: blockdev write zeroes read split ...passed 00:12:59.722 Test: blockdev write zeroes read split partial ...passed 00:12:59.722 Test: blockdev reset ...[2024-11-04 13:46:46.441142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:59.722 [2024-11-04 13:46:46.445206] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:12:59.722 passed 00:12:59.722 Test: blockdev write read 8 blocks ...passed 00:12:59.722 Test: blockdev write read size > 128k ...passed 00:12:59.722 Test: blockdev write read invalid size ...passed 00:12:59.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.722 Test: blockdev write read max offset ...passed 00:12:59.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.722 Test: blockdev writev readv 8 blocks ...passed 00:12:59.722 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.722 Test: blockdev writev readv block ...passed 00:12:59.722 Test: blockdev writev readv size > 128k ...passed 00:12:59.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.722 Test: blockdev comparev and writev ...[2024-11-04 13:46:46.455045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9a34000 len:0x1000 00:12:59.722 [2024-11-04 13:46:46.455237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:59.722 passed 00:12:59.722 Test: blockdev nvme passthru rw ...passed 00:12:59.722 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:46:46.456146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:59.722 [2024-11-04 13:46:46.456196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:59.722 passed 00:12:59.722 Test: blockdev nvme admin passthru ...passed 00:12:59.722 Test: blockdev copy ...passed 00:12:59.722 Suite: bdevio tests on: Nvme0n1 00:12:59.722 Test: blockdev write read block ...passed 00:12:59.722 Test: blockdev write zeroes read block ...passed 00:12:59.722 Test: blockdev write zeroes read no split ...passed 00:12:59.722 Test: blockdev write zeroes read split ...passed 00:12:59.722 Test: blockdev write zeroes read split partial ...passed 00:12:59.722 Test: blockdev reset ...[2024-11-04 13:46:46.563784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:59.722 [2024-11-04 13:46:46.569072] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:12:59.722 passed 00:12:59.722 Test: blockdev write read 8 blocks ...passed 00:12:59.722 Test: blockdev write read size > 128k ...passed 00:12:59.722 Test: blockdev write read invalid size ...passed 00:12:59.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.722 Test: blockdev write read max offset ...passed 00:12:59.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.722 Test: blockdev writev readv 8 blocks ...passed 00:12:59.722 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.722 Test: blockdev writev readv block ...passed 00:12:59.722 Test: blockdev writev readv size > 128k ...passed 00:12:59.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.722 Test: blockdev comparev and writev ...[2024-11-04 13:46:46.578766] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:59.722 separate metadata which is not supported yet. 00:12:59.722 passed 00:12:59.722 Test: blockdev nvme passthru rw ...passed 00:12:59.722 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:46:46.579316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:59.722 [2024-11-04 13:46:46.579393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:59.722 passed 00:12:59.722 Test: blockdev nvme admin passthru ...passed 00:12:59.722 Test: blockdev copy ...passed 00:12:59.722 00:12:59.722 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.722 suites 6 6 n/a 0 0 00:12:59.722 tests 138 138 138 0 0 00:12:59.722 asserts 893 893 893 0 n/a 00:12:59.722 00:12:59.722 Elapsed time = 1.799 seconds 00:12:59.722 0 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61850 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61850 ']' 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61850 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61850 00:12:59.722 killing process with pid 61850 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61850' 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61850 00:12:59.722 13:46:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61850 00:13:01.094 ************************ 00:13:01.094 END TEST bdev_bounds 00:13:01.094 ************************ 00:13:01.094 13:46:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:01.094 00:13:01.094 real 0m3.288s 00:13:01.094 user 0m8.221s 00:13:01.094 sys 0m0.448s 00:13:01.094 13:46:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.094 13:46:47
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:01.094 13:46:47 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:01.094 13:46:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:01.094 13:46:47 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.094 13:46:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.094 ************************************ 00:13:01.094 START TEST bdev_nbd 00:13:01.094 ************************************ 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61921 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61921 /var/tmp/spdk-nbd.sock 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61921 ']' 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 
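The nbd setup traced here starts a second SPDK app, bdev_svc, purely so the nbd RPCs get their own endpoint: -r binds it to a private UNIX socket and -i 0 sets its instance id, while the default /var/tmp/spdk.sock stays untouched. Condensed from the trace (paths exactly as in this run; waitforlisten sits between the launch and the first RPC):

  # launch the bdev service on its own RPC socket, loading the shared bdev config
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  nbd_pid=$!
  # every subsequent RPC targets that socket explicitly with -s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0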
00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:01.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.094 13:46:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:01.351 [2024-11-04 13:46:48.091747] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:01.351 [2024-11-04 13:46:48.092248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.608 [2024-11-04 13:46:48.292399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.608 [2024-11-04 13:46:48.478410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.540 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:02.804 13:46:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.804 1+0 records in 00:13:02.804 1+0 records out 00:13:02.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695112 s, 5.9 MB/s 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:02.804 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.805 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.805 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:03.064 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.322 1+0 records in 00:13:03.322 1+0 records out 00:13:03.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542017 s, 7.6 MB/s 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:03.322 13:46:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.322 13:46:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:03.322 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.581 1+0 records in 00:13:03.581 1+0 records out 00:13:03.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524594 s, 7.8 MB/s 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:03.581 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:13:03.840 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:03.840 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( 
i = 1 )) 00:13:03.840 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:03.840 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.840 1+0 records in 00:13:03.840 1+0 records out 00:13:03.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779204 s, 5.3 MB/s 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:13:03.841 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.099 1+0 records in 00:13:04.099 1+0 records out 00:13:04.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679499 s, 6.0 MB/s 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.099 13:46:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.357 1+0 records in 00:13:04.357 1+0 records out 00:13:04.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648266 s, 6.3 MB/s 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.357 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.616 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd0", 00:13:04.616 "bdev_name": "Nvme0n1" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd1", 00:13:04.616 "bdev_name": "Nvme1n1" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd2", 00:13:04.616 "bdev_name": "Nvme2n1" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd3", 00:13:04.616 "bdev_name": "Nvme2n2" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd4", 00:13:04.616 "bdev_name": "Nvme2n3" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd5", 00:13:04.616 "bdev_name": "Nvme3n1" 00:13:04.616 } 00:13:04.616 ]' 00:13:04.616 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:04.616 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd0", 00:13:04.616 "bdev_name": "Nvme0n1" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd1", 00:13:04.616 "bdev_name": "Nvme1n1" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 
"nbd_device": "/dev/nbd2", 00:13:04.616 "bdev_name": "Nvme2n1" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd3", 00:13:04.616 "bdev_name": "Nvme2n2" 00:13:04.616 }, 00:13:04.616 { 00:13:04.616 "nbd_device": "/dev/nbd4", 00:13:04.616 "bdev_name": "Nvme2n3" 00:13:04.617 }, 00:13:04.617 { 00:13:04.617 "nbd_device": "/dev/nbd5", 00:13:04.617 "bdev_name": "Nvme3n1" 00:13:04.617 } 00:13:04.617 ]' 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.617 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.875 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.134 13:46:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:05.392 13:46:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.392 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.649 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.909 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
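Both halves of the nbd lifecycle hang off the same /proc/partitions poll: waitfornbd loops until the new name appears and then proves the device readable with the direct-I/O dd seen earlier, while waitfornbd_exit, traced at nbd_common.sh@35-@45 above, loops until the name disappears after nbd_stop_disk. A sketch of the exit-side loop as the trace implies it; only the success path is visible in this log, and the pacing between retries is an assumption:

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do                        # @37: bounded retry
          grep -q -w "$nbd_name" /proc/partitions || break   # @38/@41: stop once the name is gone
          sleep 0.1                                          # assumed delay; the trace does not show one
      done
      return 0                                               # @45
  }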
00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.167 13:46:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.426 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:06.689 /dev/nbd0 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:06.689 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.690 1+0 records in 00:13:06.690 1+0 records out 00:13:06.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451773 s, 9.1 MB/s 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.690 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:13:06.948 /dev/nbd1 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.948 1+0 records in 00:13:06.948 1+0 records out 
00:13:06.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517695 s, 7.9 MB/s 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.948 13:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:13:07.207 /dev/nbd10 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.207 1+0 records in 00:13:07.207 1+0 records out 00:13:07.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616129 s, 6.6 MB/s 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.207 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:13:07.464 /dev/nbd11 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:13:07.723 13:46:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.723 1+0 records in 00:13:07.723 1+0 records out 00:13:07.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556387 s, 7.4 MB/s 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.723 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:13:07.981 /dev/nbd12 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.981 1+0 records in 00:13:07.981 1+0 records out 00:13:07.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612323 s, 6.7 MB/s 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.981 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:13:08.239 /dev/nbd13 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.239 1+0 records in 00:13:08.239 1+0 records out 00:13:08.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122015 s, 3.4 MB/s 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:08.239 13:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.239 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.496 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:08.496 { 00:13:08.496 "nbd_device": "/dev/nbd0", 00:13:08.496 "bdev_name": "Nvme0n1" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd1", 00:13:08.497 "bdev_name": "Nvme1n1" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd10", 00:13:08.497 "bdev_name": "Nvme2n1" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd11", 00:13:08.497 "bdev_name": "Nvme2n2" 00:13:08.497 }, 
00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd12", 00:13:08.497 "bdev_name": "Nvme2n3" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd13", 00:13:08.497 "bdev_name": "Nvme3n1" 00:13:08.497 } 00:13:08.497 ]' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd0", 00:13:08.497 "bdev_name": "Nvme0n1" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd1", 00:13:08.497 "bdev_name": "Nvme1n1" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd10", 00:13:08.497 "bdev_name": "Nvme2n1" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd11", 00:13:08.497 "bdev_name": "Nvme2n2" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd12", 00:13:08.497 "bdev_name": "Nvme2n3" 00:13:08.497 }, 00:13:08.497 { 00:13:08.497 "nbd_device": "/dev/nbd13", 00:13:08.497 "bdev_name": "Nvme3n1" 00:13:08.497 } 00:13:08.497 ]' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:08.497 /dev/nbd1 00:13:08.497 /dev/nbd10 00:13:08.497 /dev/nbd11 00:13:08.497 /dev/nbd12 00:13:08.497 /dev/nbd13' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:08.497 /dev/nbd1 00:13:08.497 /dev/nbd10 00:13:08.497 /dev/nbd11 00:13:08.497 /dev/nbd12 00:13:08.497 /dev/nbd13' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:08.497 256+0 records in 00:13:08.497 256+0 records out 00:13:08.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610042 s, 172 MB/s 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.497 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:08.754 256+0 records in 00:13:08.754 256+0 records out 00:13:08.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126338 s, 8.3 MB/s 00:13:08.755 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.755 13:46:55 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:08.755 256+0 records in 00:13:08.755 256+0 records out 00:13:08.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134432 s, 7.8 MB/s 00:13:08.755 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.755 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:09.012 256+0 records in 00:13:09.012 256+0 records out 00:13:09.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136819 s, 7.7 MB/s 00:13:09.012 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.012 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:09.012 256+0 records in 00:13:09.012 256+0 records out 00:13:09.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136869 s, 7.7 MB/s 00:13:09.012 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.012 13:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:09.270 256+0 records in 00:13:09.270 256+0 records out 00:13:09.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139405 s, 7.5 MB/s 00:13:09.270 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.270 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:09.528 256+0 records in 00:13:09.528 256+0 records out 00:13:09.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140706 s, 7.5 MB/s 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:09.528 13:46:56 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.528 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.529 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.786 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.044 13:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.303 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.869 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.870 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.870 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.129 13:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:11.388 
13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.388 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:11.646 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:12.212 malloc_lvol_verify 00:13:12.212 13:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:12.212 c000f608-2821-41b2-9493-12332ace207a 00:13:12.470 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:12.470 8a0899f4-8d84-426d-ad57-00ad4fc0d827 00:13:12.471 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:12.731 /dev/nbd0 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:12.731 13:46:59 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:12.731 mke2fs 1.47.0 (5-Feb-2023) 00:13:12.731 Discarding device blocks: 0/4096 done 00:13:12.731 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:12.731 00:13:12.731 Allocating group tables: 0/1 done 00:13:12.731 Writing inode tables: 0/1 done 00:13:12.731 Creating journal (1024 blocks): done 00:13:12.731 Writing superblocks and filesystem accounting information: 0/1 done 00:13:12.731 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.731 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.989 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61921 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61921 ']' 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61921 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61921 00:13:12.990 killing process with pid 61921 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61921' 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61921 00:13:12.990 13:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61921 00:13:14.892 ************************************ 00:13:14.892 END TEST bdev_nbd 00:13:14.892 ************************************ 00:13:14.892 13:47:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:14.892 00:13:14.892 real 0m13.360s 00:13:14.892 user 0m17.951s 00:13:14.892 sys 0m5.226s 00:13:14.892 13:47:01 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:14.892 13:47:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:14.892 13:47:01 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:14.892 13:47:01 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:13:14.892 skipping fio tests on NVMe due to multi-ns failures. 00:13:14.892 13:47:01 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:13:14.892 13:47:01 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:14.892 13:47:01 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:14.892 13:47:01 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:14.892 13:47:01 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.892 13:47:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.892 ************************************ 00:13:14.892 START TEST bdev_verify 00:13:14.892 ************************************ 00:13:14.893 13:47:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:14.893 [2024-11-04 13:47:01.509177] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:14.893 [2024-11-04 13:47:01.509356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62336 ] 00:13:14.893 [2024-11-04 13:47:01.713245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.151 [2024-11-04 13:47:01.891275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.151 [2024-11-04 13:47:01.891284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.087 Running I/O for 5 seconds... 
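The bdev_verify stage launched above reduces to a single bdevperf invocation against the JSON config generated earlier in the job. A minimal hand-run equivalent, assuming the /home/vagrant/spdk_repo layout used throughout this log, is:

    # Sketch of the verify run traced above; flags copied from the log.
    #   -q 128    : keep 128 I/Os outstanding per job
    #   -o 4096   : 4 KiB I/O size
    #   -w verify : write each block, read it back, compare
    #   -t 5      : run for 5 seconds
    #   -C        : every core drives every bdev, which is why each Nvme*n*
    #               appears below under both Core Mask 0x1 and Core Mask 0x2
    #   -m 0x3    : reactors on cores 0 and 1
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3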
00:13:18.424 18624.00 IOPS, 72.75 MiB/s
[2024-11-04T13:47:06.278Z] 18048.00 IOPS, 70.50 MiB/s
[2024-11-04T13:47:07.210Z] 18368.00 IOPS, 71.75 MiB/s
[2024-11-04T13:47:08.143Z] 17936.00 IOPS, 70.06 MiB/s
[2024-11-04T13:47:08.143Z] 17651.20 IOPS, 68.95 MiB/s
00:13:21.221 Latency(us)
00:13:21.221 [2024-11-04T13:47:08.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:21.221 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0x0 length 0xbd0bd
00:13:21.221 Nvme0n1 : 5.06 1443.23 5.64 0.00 0.00 88363.12 17850.76 84884.72
00:13:21.221 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:21.221 Nvme0n1 : 5.07 1464.89 5.72 0.00 0.00 86736.58 16976.94 77394.90
00:13:21.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0x0 length 0xa0000
00:13:21.221 Nvme1n1 : 5.06 1442.82 5.64 0.00 0.00 88208.28 20347.37 82388.11
00:13:21.221 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0xa0000 length 0xa0000
00:13:21.221 Nvme1n1 : 5.07 1463.55 5.72 0.00 0.00 86648.56 17476.27 81389.47
00:13:21.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0x0 length 0x80000
00:13:21.221 Nvme2n1 : 5.06 1442.37 5.63 0.00 0.00 88116.55 22344.66 79891.50
00:13:21.221 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0x80000 length 0x80000
00:13:21.221 Nvme2n1 : 5.08 1462.64 5.71 0.00 0.00 86533.60 11796.48 83386.76
00:13:21.221 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:21.221 Verification LBA range: start 0x0 length 0x80000
00:13:21.222 Nvme2n2 : 5.07 1451.53 5.67 0.00 0.00 87445.89 4868.39 79891.50
00:13:21.222 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:21.222 Verification LBA range: start 0x80000 length 0x80000
00:13:21.222 Nvme2n2 : 5.06 1466.61 5.73 0.00 0.00 87063.88 17601.10 82887.44
00:13:21.222 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:21.222 Verification LBA range: start 0x0 length 0x80000
00:13:21.222 Nvme2n3 : 5.07 1451.05 5.67 0.00 0.00 87307.05 4962.01 80890.15
00:13:21.222 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:21.222 Verification LBA range: start 0x80000 length 0x80000
00:13:21.222 Nvme2n3 : 5.06 1466.10 5.73 0.00 0.00 86949.77 17226.61 77894.22
00:13:21.222 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:21.222 Verification LBA range: start 0x0 length 0x20000
00:13:21.222 Nvme3n1 : 5.08 1450.22 5.66 0.00 0.00 87213.83 6678.43 82887.44
00:13:21.222 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:21.222 Verification LBA range: start 0x20000 length 0x20000
00:13:21.222 Nvme3n1 : 5.07 1465.51 5.72 0.00 0.00 86822.18 16976.94 73899.64
00:13:21.222 [2024-11-04T13:47:08.144Z] ===================================================================================================================
00:13:21.222 [2024-11-04T13:47:08.144Z] Total : 17470.51 68.24 0.00 0.00 87279.92 4868.39 84884.72
00:13:23.119
00:13:23.119 real 0m8.229s
00:13:23.119 user 0m14.975s
00:13:23.119 sys 0m0.353s
00:13:23.119 13:47:09 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.119 13:47:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:23.119 ************************************ 00:13:23.119 END TEST bdev_verify 00:13:23.119 ************************************ 00:13:23.119 13:47:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:23.119 13:47:09 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:23.119 13:47:09 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.119 13:47:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.119 ************************************ 00:13:23.119 START TEST bdev_verify_big_io 00:13:23.119 ************************************ 00:13:23.119 13:47:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:23.119 [2024-11-04 13:47:09.790785] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:23.119 [2024-11-04 13:47:09.791047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62440 ] 00:13:23.119 [2024-11-04 13:47:09.994322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:23.376 [2024-11-04 13:47:10.188821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.376 [2024-11-04 13:47:10.188825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.313 Running I/O for 5 seconds... 
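A quick cross-check on the throughput samples that follow: bdevperf's MiB/s figure is simply IOPS times the I/O size (65536 bytes in this big-I/O pass, against 4096 in the verify pass above) divided by 2^20. The first two progress lines below can be reproduced as:

    # Not taken from the log; just verifying the reported arithmetic.
    awk 'BEGIN { printf "%.2f MiB/s\n", 1728.00 * 65536 / 1048576 }'   # 108.00
    awk 'BEGIN { printf "%.2f MiB/s\n", 2878.50 * 65536 / 1048576 }'   # 179.91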
00:13:29.442 1728.00 IOPS, 108.00 MiB/s
[2024-11-04T13:47:17.299Z] 2878.50 IOPS, 179.91 MiB/s
[2024-11-04T13:47:17.299Z] 2730.33 IOPS, 170.65 MiB/s
00:13:30.377 Latency(us)
00:13:30.377 [2024-11-04T13:47:17.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:30.377 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0x0 length 0xbd0b
00:13:30.377 Nvme0n1 : 5.63 124.95 7.81 0.00 0.00 982100.09 13481.69 1102502.77
00:13:30.377 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:30.377 Nvme0n1 : 5.63 130.66 8.17 0.00 0.00 948856.97 30583.47 982665.51
00:13:30.377 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0x0 length 0xa000
00:13:30.377 Nvme1n1 : 5.78 122.73 7.67 0.00 0.00 963028.57 99365.06 1166415.97
00:13:30.377 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0xa000 length 0xa000
00:13:30.377 Nvme1n1 : 5.73 129.49 8.09 0.00 0.00 915843.86 107354.21 890790.28
00:13:30.377 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0x0 length 0x8000
00:13:30.377 Nvme2n1 : 5.88 128.49 8.03 0.00 0.00 901799.17 75896.93 1182394.27
00:13:30.377 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0x8000 length 0x8000
00:13:30.377 Nvme2n1 : 5.73 133.93 8.37 0.00 0.00 871074.54 96369.13 902774.00
00:13:30.377 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.377 Verification LBA range: start 0x0 length 0x8000
00:13:30.378 Nvme2n2 : 5.90 134.65 8.42 0.00 0.00 842684.79 18849.40 1214350.87
00:13:30.378 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.378 Verification LBA range: start 0x8000 length 0x8000
00:13:30.378 Nvme2n2 : 5.82 136.47 8.53 0.00 0.00 826592.83 80390.83 934730.61
00:13:30.378 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.378 Verification LBA range: start 0x0 length 0x8000
00:13:30.378 Nvme2n3 : 5.91 134.08 8.38 0.00 0.00 815143.40 18849.40 1741634.80
00:13:30.378 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.378 Verification LBA range: start 0x8000 length 0x8000
00:13:30.378 Nvme2n3 : 5.87 147.27 9.20 0.00 0.00 754551.97 10735.42 974676.36
00:13:30.378 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:30.378 Verification LBA range: start 0x0 length 0x2000
00:13:30.378 Nvme3n1 : 5.95 154.22 9.64 0.00 0.00 688820.06 3635.69 1517938.59
00:13:30.378 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:30.378 Verification LBA range: start 0x2000 length 0x2000
00:13:30.378 Nvme3n1 : 5.87 152.56 9.53 0.00 0.00 715480.26 3245.59 1006632.96
00:13:30.378 [2024-11-04T13:47:17.300Z] ===================================================================================================================
00:13:30.378 [2024-11-04T13:47:17.300Z] Total : 1629.51 101.84 0.00 0.00 844426.79 3245.59 1741634.80
00:13:32.284
00:13:32.284 real 0m9.318s
00:13:32.284 user 0m17.200s
00:13:32.284 sys 0m0.348s
00:13:32.284 13:47:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:32.284 13:47:18
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.284 ************************************ 00:13:32.284 END TEST bdev_verify_big_io 00:13:32.285 ************************************ 00:13:32.285 13:47:19 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.285 13:47:19 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:32.285 13:47:19 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.285 13:47:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.285 ************************************ 00:13:32.285 START TEST bdev_write_zeroes 00:13:32.285 ************************************ 00:13:32.285 13:47:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.285 [2024-11-04 13:47:19.162519] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:32.285 [2024-11-04 13:47:19.162753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62560 ] 00:13:32.543 [2024-11-04 13:47:19.369014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.801 [2024-11-04 13:47:19.545959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.366 Running I/O for 1 seconds... 
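The bdev_write_zeroes stage launched above is the same bdevperf binary with -w write_zeroes, which submits zero-fill requests (offloaded to the bdev's native write-zeroes path where one exists) instead of data writes, and -t 1, hence the single one-second sample below. A hand-run sketch, again assuming this log's repo layout:

    # Flags copied from the run_test line above; the trailing '' positional
    # argument passed by the harness is omitted here.
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1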
00:13:34.739 44006.00 IOPS, 171.90 MiB/s
00:13:34.739
00:13:34.739 Latency(us)
00:13:34.739 [2024-11-04T13:47:21.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:34.739 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.739 Nvme0n1 : 1.16 6325.56 24.71 0.00 0.00 19692.61 7833.11 243669.09
00:13:34.739 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.739 Nvme1n1 : 1.15 6501.58 25.40 0.00 0.00 19624.79 11921.31 165774.87
00:13:34.739 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.739 Nvme2n1 : 1.15 6492.27 25.36 0.00 0.00 19600.17 12046.14 165774.87
00:13:34.739 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.739 Nvme2n2 : 1.15 6483.21 25.33 0.00 0.00 19557.68 11484.40 166773.52
00:13:34.739 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.739 Nvme2n3 : 1.16 6474.11 25.29 0.00 0.00 19556.98 12170.97 167772.16
00:13:34.739 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:34.739 Nvme3n1 : 1.16 6465.47 25.26 0.00 0.00 19555.28 11234.74 167772.16
00:13:34.739 [2024-11-04T13:47:21.661Z] ===================================================================================================================
00:13:34.739 [2024-11-04T13:47:21.661Z] Total : 38742.20 151.34 0.00 0.00 19597.59 7833.11 243669.09
00:13:36.662
00:13:36.662 real 0m4.272s
00:13:36.662 user 0m3.802s
00:13:36.662 sys 0m0.338s
00:13:36.662 13:47:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:36.662 13:47:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:36.662 ************************************
00:13:36.662 END TEST bdev_write_zeroes
00:13:36.662 ************************************
00:13:36.662 13:47:23 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:36.662 13:47:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:13:36.662 13:47:23 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:36.662 13:47:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:36.662 ************************************
00:13:36.662 START TEST bdev_json_nonenclosed
00:13:36.662 ************************************
00:13:36.662 13:47:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:36.662 [2024-11-04 13:47:23.508525] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization...
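The two tests that follow are negative tests: bdevperf is pointed at deliberately malformed JSON configs and must fail cleanly rather than start I/O. For bdev_json_nonenclosed, the top level of the config is not wrapped in an object, along the lines of this illustrative sketch (not the verbatim contents of nonenclosed.json):

    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]

which json_config_prepare_ctx rejects with the "not enclosed in {}" error visible below.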
00:13:36.662 [2024-11-04 13:47:23.508727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62624 ] 00:13:36.920 [2024-11-04 13:47:23.721507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.178 [2024-11-04 13:47:23.914107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.178 [2024-11-04 13:47:23.914244] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:37.178 [2024-11-04 13:47:23.914292] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:37.178 [2024-11-04 13:47:23.914316] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:37.437 00:13:37.437 real 0m0.867s 00:13:37.437 user 0m0.578s 00:13:37.437 sys 0m0.181s 00:13:37.437 13:47:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:37.437 ************************************ 00:13:37.437 END TEST bdev_json_nonenclosed 00:13:37.437 13:47:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:37.437 ************************************ 00:13:37.437 13:47:24 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.437 13:47:24 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:37.437 13:47:24 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:37.437 13:47:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.437 ************************************ 00:13:37.437 START TEST bdev_json_nonarray 00:13:37.437 ************************************ 00:13:37.437 13:47:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.695 [2024-11-04 13:47:24.455105] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:37.695 [2024-11-04 13:47:24.455349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62655 ] 00:13:37.953 [2024-11-04 13:47:24.666357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.953 [2024-11-04 13:47:24.842296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.953 [2024-11-04 13:47:24.842451] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
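The bdev_json_nonarray variant just traced is the complementary case: the braces are present, but "subsystems" is bound to an object rather than an array, roughly as follows (again an illustrative sketch, not the shipped nonarray.json):

    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }

hence the "'subsystems' should be an array." error above, after which the app shuts down without ever starting the write_zeroes workload.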
00:13:37.953 [2024-11-04 13:47:24.842490] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:37.953 [2024-11-04 13:47:24.842512] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.521 00:13:38.521 real 0m0.850s 00:13:38.521 user 0m0.572s 00:13:38.521 sys 0m0.169s 00:13:38.521 13:47:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.521 ************************************ 00:13:38.521 END TEST bdev_json_nonarray 00:13:38.521 13:47:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:38.521 ************************************ 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:13:38.521 13:47:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:13:38.521 00:13:38.521 real 0m48.133s 00:13:38.521 user 1m10.538s 00:13:38.521 sys 0m8.502s 00:13:38.521 13:47:25 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.521 ************************************ 00:13:38.521 END TEST blockdev_nvme 00:13:38.521 13:47:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.521 ************************************ 00:13:38.521 13:47:25 -- spdk/autotest.sh@209 -- # uname -s 00:13:38.521 13:47:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:13:38.521 13:47:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:38.521 13:47:25 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:38.521 13:47:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.521 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:13:38.521 ************************************ 00:13:38.521 START TEST blockdev_nvme_gpt 00:13:38.521 ************************************ 00:13:38.521 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:38.521 * Looking for test storage... 
00:13:38.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:38.521 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:38.521 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:38.521 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:13:38.779 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.779 13:47:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:13:38.779 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.779 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:38.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.779 --rc genhtml_branch_coverage=1 00:13:38.779 --rc genhtml_function_coverage=1 00:13:38.779 --rc genhtml_legend=1 00:13:38.779 --rc geninfo_all_blocks=1 00:13:38.779 --rc geninfo_unexecuted_blocks=1 00:13:38.779 00:13:38.779 ' 00:13:38.779 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:38.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.779 --rc 
genhtml_branch_coverage=1 00:13:38.780 --rc genhtml_function_coverage=1 00:13:38.780 --rc genhtml_legend=1 00:13:38.780 --rc geninfo_all_blocks=1 00:13:38.780 --rc geninfo_unexecuted_blocks=1 00:13:38.780 00:13:38.780 ' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:38.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.780 --rc genhtml_branch_coverage=1 00:13:38.780 --rc genhtml_function_coverage=1 00:13:38.780 --rc genhtml_legend=1 00:13:38.780 --rc geninfo_all_blocks=1 00:13:38.780 --rc geninfo_unexecuted_blocks=1 00:13:38.780 00:13:38.780 ' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:38.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.780 --rc genhtml_branch_coverage=1 00:13:38.780 --rc genhtml_function_coverage=1 00:13:38.780 --rc genhtml_legend=1 00:13:38.780 --rc geninfo_all_blocks=1 00:13:38.780 --rc geninfo_unexecuted_blocks=1 00:13:38.780 00:13:38.780 ' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62739 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62739 
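Note: waitforlisten blocks until the freshly started spdk_tgt is accepting RPC connections, so that later rpc_cmd calls do not race the target's startup. Roughly, and with hypothetical names (this is a sketch, not the actual autotest_common.sh implementation):

    # poll until the target's RPC socket exists or the process dies
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            [[ -S $sock ]] && return 0               # socket is up, RPC usable
            sleep 0.1
        done
        return 1                                     # timed out
    }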
00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 62739 ']' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.780 13:47:25 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:38.780 13:47:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:38.780 [2024-11-04 13:47:25.635810] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:38.780 [2024-11-04 13:47:25.635993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62739 ] 00:13:39.039 [2024-11-04 13:47:25.839205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.297 [2024-11-04 13:47:26.010407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.232 13:47:26 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.232 13:47:26 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:13:40.232 13:47:26 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:40.232 13:47:26 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:13:40.232 13:47:26 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:40.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:40.776 Waiting for block devices as requested 00:13:40.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:41.034 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:41.034 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:41.291 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:46.552 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:46.552 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.552 13:47:33 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:46.552 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:13:46.553 13:47:33 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:13:46.553 BYT; 00:13:46.553 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:13:46.553 BYT; 00:13:46.553 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:46.553 13:47:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:46.553 13:47:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:13:47.486 The operation has completed successfully. 00:13:47.486 13:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:13:48.419 The operation has completed successfully. 00:13:48.419 13:47:35 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:48.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.921 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.921 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.921 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.921 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.921 13:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:13:49.921 13:47:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.921 13:47:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:49.921 [] 00:13:49.921 13:47:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.921 13:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:13:49.921 13:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:13:49.921 13:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:49.921 13:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:49.921 13:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:49.921 13:47:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.921 13:47:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:50.489 13:47:37 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:50.489 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:50.489 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:50.490 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2e304f8d-913c-45da-b114-880f380286c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2e304f8d-913c-45da-b114-880f380286c9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e18f60a7-a2fa-4691-ba9e-17457b9b5b50"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e18f60a7-a2fa-4691-ba9e-17457b9b5b50",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1e6c3230-4d5a-481f-b9f1-45a5392a7c65"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1e6c3230-4d5a-481f-b9f1-45a5392a7c65",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3ae8c18f-fa70-48e4-b028-0ab161d4ffd1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3ae8c18f-fa70-48e4-b028-0ab161d4ffd1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "00b39437-051c-4125-90be-55d223bbee8e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "00b39437-051c-4125-90be-55d223bbee8e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:50.490 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:50.490 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:13:50.490 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:50.490 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62739 00:13:50.490 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 62739 ']' 00:13:50.490 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 62739 00:13:50.490 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:13:50.490 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:50.490 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62739 00:13:50.749 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:50.749 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:50.749 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62739' 00:13:50.749 killing process with pid 62739 00:13:50.749 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 62739 00:13:50.749 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 62739 00:13:54.034 13:47:40 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:54.034 13:47:40 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:54.034 13:47:40 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:54.034 13:47:40 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.034 13:47:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:54.034 ************************************ 00:13:54.034 START TEST bdev_hello_world 00:13:54.034 ************************************ 00:13:54.034 13:47:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:54.034 
[2024-11-04 13:47:40.434386] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:54.034 [2024-11-04 13:47:40.434588] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63387 ] 00:13:54.034 [2024-11-04 13:47:40.633865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.034 [2024-11-04 13:47:40.830996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.969 [2024-11-04 13:47:41.551332] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:54.969 [2024-11-04 13:47:41.551395] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:54.969 [2024-11-04 13:47:41.551429] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:54.969 [2024-11-04 13:47:41.555108] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:54.969 [2024-11-04 13:47:41.555623] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:54.969 [2024-11-04 13:47:41.555661] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:54.969 [2024-11-04 13:47:41.555853] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:54.969 00:13:54.969 [2024-11-04 13:47:41.555886] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:55.911 00:13:55.911 real 0m2.467s 00:13:55.911 user 0m2.042s 00:13:55.911 sys 0m0.306s 00:13:55.911 13:47:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:55.911 ************************************ 00:13:55.911 END TEST bdev_hello_world 00:13:55.911 ************************************ 00:13:55.911 13:47:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:56.169 13:47:42 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:56.169 13:47:42 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:56.169 13:47:42 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:56.169 13:47:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:56.169 ************************************ 00:13:56.169 START TEST bdev_bounds 00:13:56.169 ************************************ 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63436 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:56.169 Process bdevio pid: 63436 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63436' 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63436 00:13:56.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 63436 ']' 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.169 13:47:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:56.169 [2024-11-04 13:47:42.971095] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:13:56.169 [2024-11-04 13:47:42.971591] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63436 ] 00:13:56.427 [2024-11-04 13:47:43.168984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.427 [2024-11-04 13:47:43.303795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.427 [2024-11-04 13:47:43.304017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.427 [2024-11-04 13:47:43.304052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.363 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.363 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:13:57.363 13:47:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:57.363 I/O targets: 00:13:57.363 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:57.363 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:13:57.363 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:13:57.363 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:57.363 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:57.363 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:57.363 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:57.363 00:13:57.363 00:13:57.363 CUnit - A unit testing framework for C - Version 2.1-3 00:13:57.363 http://cunit.sourceforge.net/ 00:13:57.363 00:13:57.363 00:13:57.363 Suite: bdevio tests on: Nvme3n1 00:13:57.363 Test: blockdev write read block ...passed 00:13:57.363 Test: blockdev write zeroes read block ...passed 00:13:57.363 Test: blockdev write zeroes read no split ...passed 00:13:57.363 Test: blockdev write zeroes read split ...passed 00:13:57.363 Test: blockdev write zeroes read split partial ...passed 00:13:57.363 Test: blockdev reset ...[2024-11-04 13:47:44.222490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:57.363 [2024-11-04 13:47:44.226822] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:13:57.363 passed 00:13:57.363 Test: blockdev write read 8 blocks ...passed 00:13:57.363 Test: blockdev write read size > 128k ...passed 00:13:57.363 Test: blockdev write read invalid size ...passed 00:13:57.363 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.363 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.363 Test: blockdev write read max offset ...passed 00:13:57.363 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.363 Test: blockdev writev readv 8 blocks ...passed 00:13:57.363 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.363 Test: blockdev writev readv block ...passed 00:13:57.363 Test: blockdev writev readv size > 128k ...passed 00:13:57.363 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.363 Test: blockdev comparev and writev ...[2024-11-04 13:47:44.236890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac204000 len:0x1000 00:13:57.363 passed 00:13:57.363 Test: blockdev nvme passthru rw ...[2024-11-04 13:47:44.237181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:57.363 passed 00:13:57.363 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:47:44.238004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:57.363 [2024-11-04 13:47:44.238219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:57.363 passed 00:13:57.363 Test: blockdev nvme admin passthru ...passed 00:13:57.363 Test: blockdev copy ...passed 00:13:57.363 Suite: bdevio tests on: Nvme2n3 00:13:57.363 Test: blockdev write read block ...passed 00:13:57.363 Test: blockdev write zeroes read block ...passed 00:13:57.363 Test: blockdev write zeroes read no split ...passed 00:13:57.622 Test: blockdev write zeroes read split ...passed 00:13:57.622 Test: blockdev write zeroes read split partial ...passed 00:13:57.622 Test: blockdev reset ...[2024-11-04 13:47:44.320248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:57.622 [2024-11-04 13:47:44.325466] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:57.622 passed 00:13:57.622 Test: blockdev write read 8 blocks ...passed 00:13:57.622 Test: blockdev write read size > 128k ...passed 00:13:57.622 Test: blockdev write read invalid size ...passed 00:13:57.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.622 Test: blockdev write read max offset ...passed 00:13:57.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.622 Test: blockdev writev readv 8 blocks ...passed 00:13:57.622 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.622 Test: blockdev writev readv block ...passed 00:13:57.622 Test: blockdev writev readv size > 128k ...passed 00:13:57.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.622 Test: blockdev comparev and writev ...[2024-11-04 13:47:44.334272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac202000 len:0x1000 00:13:57.622 passed 00:13:57.622 Test: blockdev nvme passthru rw ...[2024-11-04 13:47:44.334634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:57.622 passed 00:13:57.622 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:47:44.335491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:57.622 [2024-11-04 13:47:44.335616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:57.622 passed 00:13:57.622 Test: blockdev nvme admin passthru ...passed 00:13:57.622 Test: blockdev copy ...passed 00:13:57.622 Suite: bdevio tests on: Nvme2n2 00:13:57.622 Test: blockdev write read block ...passed 00:13:57.622 Test: blockdev write zeroes read block ...passed 00:13:57.622 Test: blockdev write zeroes read no split ...passed 00:13:57.622 Test: blockdev write zeroes read split ...passed 00:13:57.622 Test: blockdev write zeroes read split partial ...passed 00:13:57.622 Test: blockdev reset ...[2024-11-04 13:47:44.425517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:57.622 [2024-11-04 13:47:44.430367] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:57.622 passed 00:13:57.622 Test: blockdev write read 8 blocks ...passed 00:13:57.622 Test: blockdev write read size > 128k ...passed 00:13:57.622 Test: blockdev write read invalid size ...passed 00:13:57.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.622 Test: blockdev write read max offset ...passed 00:13:57.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.622 Test: blockdev writev readv 8 blocks ...passed 00:13:57.622 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.622 Test: blockdev writev readv block ...passed 00:13:57.622 Test: blockdev writev readv size > 128k ...passed 00:13:57.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.622 Test: blockdev comparev and writev ...[2024-11-04 13:47:44.441150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be838000 len:0x1000 00:13:57.622 [2024-11-04 13:47:44.441495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:57.622 passed 00:13:57.622 Test: blockdev nvme passthru rw ...passed 00:13:57.622 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:47:44.442648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:57.622 [2024-11-04 13:47:44.442868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:57.622 passed 00:13:57.622 Test: blockdev nvme admin passthru ...passed 00:13:57.622 Test: blockdev copy ...passed 00:13:57.622 Suite: bdevio tests on: Nvme2n1 00:13:57.622 Test: blockdev write read block ...passed 00:13:57.622 Test: blockdev write zeroes read block ...passed 00:13:57.622 Test: blockdev write zeroes read no split ...passed 00:13:57.622 Test: blockdev write zeroes read split ...passed 00:13:57.622 Test: blockdev write zeroes read split partial ...passed 00:13:57.622 Test: blockdev reset ...[2024-11-04 13:47:44.519989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:57.622 [2024-11-04 13:47:44.524777] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:13:57.622 passed 00:13:57.622 Test: blockdev write read 8 blocks ...
00:13:57.622 passed 00:13:57.622 Test: blockdev write read size > 128k ...passed 00:13:57.622 Test: blockdev write read invalid size ...passed 00:13:57.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.622 Test: blockdev write read max offset ...passed 00:13:57.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.622 Test: blockdev writev readv 8 blocks ...passed 00:13:57.622 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.622 Test: blockdev writev readv block ...passed 00:13:57.622 Test: blockdev writev readv size > 128k ...passed 00:13:57.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.622 Test: blockdev comparev and writev ...[2024-11-04 13:47:44.533929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be834000 len:0x1000 00:13:57.622 passed 00:13:57.622 Test: blockdev nvme passthru rw ...[2024-11-04 13:47:44.534195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:57.622 passed 00:13:57.623 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:47:44.535029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:57.623 passed 00:13:57.623 Test: blockdev nvme admin passthru ...[2024-11-04 13:47:44.535270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:57.623 passed 00:13:57.623 Test: blockdev copy ...passed 00:13:57.623 Suite: bdevio tests on: Nvme1n1p2 00:13:57.623 Test: blockdev write read block ...passed 00:13:57.623 Test: blockdev write zeroes read block ...passed 00:13:57.881 Test: blockdev write zeroes read no split ...passed 00:13:57.881 Test: blockdev write zeroes read split ...passed 00:13:57.881 Test: blockdev write zeroes read split partial ...passed 00:13:57.881 Test: blockdev reset ...[2024-11-04 13:47:44.622254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:57.881 passed 00:13:57.881 Test: blockdev write read 8 blocks ...[2024-11-04 13:47:44.626618] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:57.881 passed 00:13:57.881 Test: blockdev write read size > 128k ...passed 00:13:57.881 Test: blockdev write read invalid size ...passed 00:13:57.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.881 Test: blockdev write read max offset ...passed 00:13:57.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.881 Test: blockdev writev readv 8 blocks ...passed 00:13:57.881 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.881 Test: blockdev writev readv block ...passed 00:13:57.881 Test: blockdev writev readv size > 128k ...passed 00:13:57.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.881 Test: blockdev comparev and writev ...[2024-11-04 13:47:44.635185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2be830000 len:0x1000 00:13:57.881 passed 00:13:57.881 Test: blockdev nvme passthru rw ...passed 00:13:57.881 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.881 Test: blockdev nvme admin passthru ...passed 00:13:57.881 Test: blockdev copy ...[2024-11-04 13:47:44.635467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:57.881 passed 00:13:57.881 Suite: bdevio tests on: Nvme1n1p1 00:13:57.881 Test: blockdev write read block ...passed 00:13:57.881 Test: blockdev write zeroes read block ...passed 00:13:57.881 Test: blockdev write zeroes read no split ...passed 00:13:57.881 Test: blockdev write zeroes read split ...passed 00:13:57.881 Test: blockdev write zeroes read split partial ...passed 00:13:57.881 Test: blockdev reset ...[2024-11-04 13:47:44.709413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:57.881 [2024-11-04 13:47:44.713746] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:57.881 passed 00:13:57.881 Test: blockdev write read 8 blocks ...passed 00:13:57.881 Test: blockdev write read size > 128k ...passed 00:13:57.881 Test: blockdev write read invalid size ...passed 00:13:57.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:57.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:57.881 Test: blockdev write read max offset ...passed 00:13:57.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:57.881 Test: blockdev writev readv 8 blocks ...passed 00:13:57.881 Test: blockdev writev readv 30 x 1block ...passed 00:13:57.881 Test: blockdev writev readv block ...passed 00:13:57.881 Test: blockdev writev readv size > 128k ...passed 00:13:57.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:57.881 Test: blockdev comparev and writev ...[2024-11-04 13:47:44.722384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ac40e000 len:0x1000 00:13:57.881 passed 00:13:57.881 Test: blockdev nvme passthru rw ...passed 00:13:57.881 Test: blockdev nvme passthru vendor specific ...passed 00:13:57.881 Test: blockdev nvme admin passthru ...passed 00:13:57.882 Test: blockdev copy ...[2024-11-04 13:47:44.722691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:57.882 passed 00:13:57.882 Suite: bdevio tests on: Nvme0n1 00:13:57.882 Test: blockdev write read block ...passed 00:13:57.882 Test: blockdev write zeroes read block ...passed 00:13:57.882 Test: blockdev write zeroes read no split ...passed 00:13:57.882 Test: blockdev write zeroes read split ...passed 00:13:57.882 Test: blockdev write zeroes read split partial ...passed 00:13:57.882 Test: blockdev reset ...[2024-11-04 13:47:44.797837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:57.882 passed 00:13:57.882 Test: blockdev write read 8 blocks ...[2024-11-04 13:47:44.801878] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:57.882 passed 00:13:58.140 Test: blockdev write read size > 128k ...passed 00:13:58.140 Test: blockdev write read invalid size ...passed 00:13:58.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:58.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:58.140 Test: blockdev write read max offset ...passed 00:13:58.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:58.140 Test: blockdev writev readv 8 blocks ...passed 00:13:58.140 Test: blockdev writev readv 30 x 1block ...passed 00:13:58.140 Test: blockdev writev readv block ...passed 00:13:58.140 Test: blockdev writev readv size > 128k ...passed 00:13:58.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:58.140 Test: blockdev comparev and writev ...passed 00:13:58.140 Test: blockdev nvme passthru rw ...[2024-11-04 13:47:44.808837] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:58.140 separate metadata which is not supported yet. 
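Nvme0n1 is formatted with separate per-block metadata, which bdevio's compare-and-write path cannot express yet, so the *ERROR* line above is an informative skip rather than a failure (the run summary below still counts zero failed tests). One way to see which LBA format, and hence how much metadata, a namespace carries is nvme-cli's identify-namespace output, assuming nvme-cli is installed and the kernel device node is known:

    # ms:<n> with n > 0 marks a format carrying per-block metadata;
    # "(in use)" tags the active one. The device path is a placeholder.
    nvme id-ns /dev/nvme0n1 | grep lbaf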
00:13:58.140 passed 00:13:58.140 Test: blockdev nvme passthru vendor specific ...[2024-11-04 13:47:44.809429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:58.140 passed 00:13:58.140 Test: blockdev nvme admin passthru ...[2024-11-04 13:47:44.809667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:58.140 passed 00:13:58.140 Test: blockdev copy ...passed 00:13:58.140 00:13:58.140 Run Summary: Type Total Ran Passed Failed Inactive 00:13:58.140 suites 7 7 n/a 0 0 00:13:58.140 tests 161 161 161 0 0 00:13:58.140 asserts 1025 1025 1025 0 n/a 00:13:58.140 00:13:58.140 Elapsed time = 1.845 seconds 00:13:58.140 0 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63436 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 63436 ']' 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 63436 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63436 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63436' 00:13:58.140 killing process with pid 63436 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 63436 00:13:58.140 13:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 63436 00:13:59.547 ************************************ 00:13:59.548 END TEST bdev_bounds 00:13:59.548 ************************************ 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:59.548 00:13:59.548 real 0m3.200s 00:13:59.548 user 0m8.164s 00:13:59.548 sys 0m0.494s 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:59.548 13:47:46 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:59.548 13:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:59.548 13:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:59.548 13:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:59.548 ************************************ 00:13:59.548 START TEST bdev_nbd 00:13:59.548 ************************************ 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63501 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63501 /var/tmp/spdk-nbd.sock 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 63501 ']' 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.548 13:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:59.548 [2024-11-04 13:47:46.195268] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
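Everything from here on talks to a dedicated bdev_svc app over /var/tmp/spdk-nbd.sock; waitforlisten simply blocks until that socket answers RPCs. A standalone condensation of the start-and-wait pattern visible in the trace, with the retry budget and poll interval as guesses (the real helper may differ):

    sock=/var/tmp/spdk-nbd.sock
    test/app/bdev_svc/bdev_svc -r "$sock" --json test/bdev/bdev.json &
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app is up and listening.
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done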
00:13:59.548 [2024-11-04 13:47:46.195828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.548 [2024-11-04 13:47:46.368744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.806 [2024-11-04 13:47:46.496136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.372 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.372 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:14:00.372 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:00.373 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.631 1+0 records in 00:14:00.631 1+0 records out 00:14:00.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460573 s, 8.9 MB/s 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:00.631 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.890 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:00.890 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:00.890 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:00.890 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:00.890 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.150 1+0 records in 00:14:01.150 1+0 records out 00:14:01.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547386 s, 7.5 MB/s 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:01.150 13:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:01.408 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.409 1+0 records in 00:14:01.409 1+0 records out 00:14:01.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465405 s, 8.8 MB/s 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:01.409 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.668 1+0 records in 00:14:01.668 1+0 records out 00:14:01.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000764142 s, 5.4 MB/s 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:01.668 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.236 1+0 records in 00:14:02.236 1+0 records out 00:14:02.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752472 s, 5.4 MB/s 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:02.236 13:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:02.494 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.494 1+0 records in 00:14:02.494 1+0 records out 00:14:02.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836401 s, 4.9 MB/s 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:02.495 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.753 1+0 records in 00:14:02.753 1+0 records out 00:14:02.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873976 s, 4.7 MB/s 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:14:02.753 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd0", 00:14:03.011 "bdev_name": "Nvme0n1" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd1", 00:14:03.011 "bdev_name": "Nvme1n1p1" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd2", 00:14:03.011 "bdev_name": "Nvme1n1p2" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd3", 00:14:03.011 "bdev_name": "Nvme2n1" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd4", 00:14:03.011 "bdev_name": "Nvme2n2" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd5", 00:14:03.011 "bdev_name": "Nvme2n3" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd6", 00:14:03.011 "bdev_name": "Nvme3n1" 00:14:03.011 } 00:14:03.011 ]' 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd0", 00:14:03.011 "bdev_name": "Nvme0n1" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd1", 00:14:03.011 "bdev_name": "Nvme1n1p1" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd2", 00:14:03.011 "bdev_name": "Nvme1n1p2" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd3", 00:14:03.011 "bdev_name": "Nvme2n1" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd4", 00:14:03.011 "bdev_name": "Nvme2n2" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd5", 00:14:03.011 "bdev_name": "Nvme2n3" 00:14:03.011 }, 00:14:03.011 { 00:14:03.011 "nbd_device": "/dev/nbd6", 00:14:03.011 "bdev_name": "Nvme3n1" 00:14:03.011 } 00:14:03.011 ]' 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.011 13:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.577 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.835 13:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.835 13:47:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.403 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:04.661 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.919 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.182 13:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.463 
13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:05.463 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:14:05.721 /dev/nbd0 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.721 1+0 records in 00:14:05.721 1+0 records out 00:14:05.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568992 s, 7.2 MB/s 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:05.721 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:14:05.980 /dev/nbd1 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:05.980 13:47:52 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.980 1+0 records in 00:14:05.980 1+0 records out 00:14:05.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741171 s, 5.5 MB/s 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:05.980 13:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:14:06.239 /dev/nbd10 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.239 1+0 records in 00:14:06.239 1+0 records out 00:14:06.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578873 s, 7.1 MB/s 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:06.239 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:14:06.497 /dev/nbd11 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.497 1+0 records in 00:14:06.497 1+0 records out 00:14:06.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691794 s, 5.9 MB/s 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:06.497 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:14:06.756 /dev/nbd12 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
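Every nbd_start_disk above is followed by the same readiness probe: poll /proc/partitions until the new node appears, then prove the device is actually readable with one 4 KiB O_DIRECT read whose size is checked before the scratch file is removed. Condensed from the traced helper into a self-contained function; the temp file path and sleep interval are guesses, the loop bound and the dd/stat/rm sequence match the trace:

    waitfornbd() {
        local name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O block read; an empty result means the node is not usable.
        dd if="/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -ne 0 ]]
    }
    waitfornbd nbd12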
00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.014 1+0 records in 00:14:07.014 1+0 records out 00:14:07.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858903 s, 4.8 MB/s 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.014 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:07.015 13:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:14:07.273 /dev/nbd13 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.273 1+0 records in 00:14:07.273 1+0 records out 00:14:07.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933505 s, 4.4 MB/s 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:07.273 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:14:07.532 /dev/nbd14 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.532 1+0 records in 00:14:07.532 1+0 records out 00:14:07.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921105 s, 4.4 MB/s 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.532 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:07.791 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd0", 00:14:07.791 "bdev_name": "Nvme0n1" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd1", 00:14:07.791 "bdev_name": "Nvme1n1p1" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd10", 00:14:07.791 "bdev_name": "Nvme1n1p2" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd11", 00:14:07.791 "bdev_name": "Nvme2n1" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd12", 00:14:07.791 "bdev_name": "Nvme2n2" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd13", 00:14:07.791 "bdev_name": "Nvme2n3" 
00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd14", 00:14:07.791 "bdev_name": "Nvme3n1" 00:14:07.791 } 00:14:07.791 ]' 00:14:07.791 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd0", 00:14:07.791 "bdev_name": "Nvme0n1" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd1", 00:14:07.791 "bdev_name": "Nvme1n1p1" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd10", 00:14:07.791 "bdev_name": "Nvme1n1p2" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd11", 00:14:07.791 "bdev_name": "Nvme2n1" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd12", 00:14:07.791 "bdev_name": "Nvme2n2" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd13", 00:14:07.791 "bdev_name": "Nvme2n3" 00:14:07.791 }, 00:14:07.791 { 00:14:07.791 "nbd_device": "/dev/nbd14", 00:14:07.791 "bdev_name": "Nvme3n1" 00:14:07.791 } 00:14:07.791 ]' 00:14:07.791 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:08.050 /dev/nbd1 00:14:08.050 /dev/nbd10 00:14:08.050 /dev/nbd11 00:14:08.050 /dev/nbd12 00:14:08.050 /dev/nbd13 00:14:08.050 /dev/nbd14' 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:08.050 /dev/nbd1 00:14:08.050 /dev/nbd10 00:14:08.050 /dev/nbd11 00:14:08.050 /dev/nbd12 00:14:08.050 /dev/nbd13 00:14:08.050 /dev/nbd14' 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:08.050 256+0 records in 00:14:08.050 256+0 records out 00:14:08.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100218 s, 105 MB/s 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:08.050 256+0 records in 00:14:08.050 256+0 records out 00:14:08.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.168419 s, 6.2 MB/s 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.050 13:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:08.309 256+0 records in 00:14:08.309 256+0 records out 00:14:08.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196687 s, 5.3 MB/s 00:14:08.309 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.309 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:08.566 256+0 records in 00:14:08.566 256+0 records out 00:14:08.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177899 s, 5.9 MB/s 00:14:08.566 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.566 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:08.825 256+0 records in 00:14:08.825 256+0 records out 00:14:08.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17498 s, 6.0 MB/s 00:14:08.825 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.825 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:08.825 256+0 records in 00:14:08.825 256+0 records out 00:14:08.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172512 s, 6.1 MB/s 00:14:08.825 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.825 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:09.083 256+0 records in 00:14:09.083 256+0 records out 00:14:09.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166449 s, 6.3 MB/s 00:14:09.083 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.083 13:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:09.342 256+0 records in 00:14:09.342 256+0 records out 00:14:09.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167613 s, 6.3 MB/s 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.342 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.601 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.860 13:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:10.131 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.390 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.649 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.907 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.166 13:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.425 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:11.682 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:11.941 malloc_lvol_verify 00:14:11.941 13:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:12.199 9d1324cf-d192-4e46-a69b-374ebc05d65f 00:14:12.199 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:12.457 80dc5d55-0a14-4c47-9054-b21f7ecc865e 00:14:12.457 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:12.716 /dev/nbd0 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:12.716 mke2fs 1.47.0 (5-Feb-2023) 00:14:12.716 Discarding device blocks: 0/4096 done 00:14:12.716 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:12.716 00:14:12.716 Allocating group tables: 0/1 done 00:14:12.716 Writing inode tables: 0/1 done 00:14:12.716 Creating journal (1024 blocks): done 00:14:12.716 Writing superblocks and filesystem accounting information: 0/1 done 00:14:12.716 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:12.716 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63501 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 63501 ']' 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 63501 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63501 00:14:12.973 killing process with pid 63501 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63501' 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 63501 00:14:12.973 13:47:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 63501 00:14:14.347 13:48:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:14.347 00:14:14.347 real 0m15.118s 00:14:14.347 user 0m20.051s 00:14:14.347 sys 0m6.283s 00:14:14.347 13:48:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.347 ************************************ 00:14:14.347 END TEST bdev_nbd 00:14:14.347 ************************************ 00:14:14.347 13:48:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 13:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:14.606 13:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:14:14.606 13:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:14:14.606 skipping fio tests on NVMe due to multi-ns failures. 00:14:14.606 13:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:14:14.606 13:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:14.606 13:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:14.606 13:48:01 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:14:14.606 13:48:01 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.606 13:48:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:14.606 ************************************ 00:14:14.606 START TEST bdev_verify 00:14:14.606 ************************************ 00:14:14.606 13:48:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:14.606 [2024-11-04 13:48:01.400437] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:14:14.606 [2024-11-04 13:48:01.400682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63959 ] 00:14:14.863 [2024-11-04 13:48:01.605818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:14.863 [2024-11-04 13:48:01.757438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.863 [2024-11-04 13:48:01.757471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.796 Running I/O for 5 seconds... 
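The verify pass that follows drives all seven bdevs through bdevperf. The invocation is the one shown in the run_test line above, annotated here flag by flag (meanings as used by bdevperf in this log):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \  # bdev config to attach
        -q 128 \     # queue depth per job
        -o 4096 \    # 4 KiB I/Os
        -w verify \  # write a pattern, read it back, compare
        -t 5 \       # run for 5 seconds
        -C \         # a job per bdev on every core, hence the 0x1/0x2 core-mask rows below
        -m 0x3       # reactor core mask: cores 0 and 1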
00:14:18.111 19520.00 IOPS, 76.25 MiB/s [2024-11-04T13:48:05.967Z] 19040.00 IOPS, 74.38 MiB/s [2024-11-04T13:48:06.930Z] 18645.33 IOPS, 72.83 MiB/s [2024-11-04T13:48:07.865Z] 18288.00 IOPS, 71.44 MiB/s [2024-11-04T13:48:07.865Z] 18214.40 IOPS, 71.15 MiB/s 00:14:20.943 Latency(us) 00:14:20.943 [2024-11-04T13:48:07.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.943 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0xbd0bd 00:14:20.943 Nvme0n1 : 5.10 1306.27 5.10 0.00 0.00 97756.55 20472.20 90377.26 00:14:20.943 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:20.943 Nvme0n1 : 5.10 1242.34 4.85 0.00 0.00 102361.01 13481.69 93373.20 00:14:20.943 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0x4ff80 00:14:20.943 Nvme1n1p1 : 5.10 1305.72 5.10 0.00 0.00 97616.43 22094.99 85883.37 00:14:20.943 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x4ff80 length 0x4ff80 00:14:20.943 Nvme1n1p1 : 5.10 1241.51 4.85 0.00 0.00 102216.41 14730.00 86382.69 00:14:20.943 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0x4ff7f 00:14:20.943 Nvme1n1p2 : 5.10 1305.20 5.10 0.00 0.00 97343.91 21720.50 87381.33 00:14:20.943 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:14:20.943 Nvme1n1p2 : 5.12 1249.76 4.88 0.00 0.00 101710.00 12795.12 84385.40 00:14:20.943 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0x80000 00:14:20.943 Nvme2n1 : 5.10 1304.40 5.10 0.00 0.00 97232.09 23343.30 85384.05 00:14:20.943 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x80000 length 0x80000 00:14:20.943 Nvme2n1 : 5.12 1249.32 4.88 0.00 0.00 101516.59 13232.03 80890.15 00:14:20.943 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0x80000 00:14:20.943 Nvme2n2 : 5.11 1303.76 5.09 0.00 0.00 97067.99 23842.62 82887.44 00:14:20.943 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x80000 length 0x80000 00:14:20.943 Nvme2n2 : 5.12 1248.88 4.88 0.00 0.00 101319.39 13544.11 80890.15 00:14:20.943 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0x80000 00:14:20.943 Nvme2n3 : 5.11 1303.23 5.09 0.00 0.00 96891.09 20971.52 84884.72 00:14:20.943 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x80000 length 0x80000 00:14:20.943 Nvme2n3 : 5.13 1248.45 4.88 0.00 0.00 101131.14 13981.01 85883.37 00:14:20.943 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:20.943 Verification LBA range: start 0x0 length 0x20000 00:14:20.943 Nvme3n1 : 5.11 1302.71 5.09 0.00 0.00 96726.92 12420.63 85883.37 00:14:20.944 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.944 Verification LBA range: start 0x20000 length 0x20000 00:14:20.944 
Nvme3n1 : 5.13 1248.01 4.88 0.00 0.00 100957.55 13356.86 91375.91 00:14:20.944 [2024-11-04T13:48:07.866Z] =================================================================================================================== 00:14:20.944 [2024-11-04T13:48:07.866Z] Total : 17859.57 69.76 0.00 0.00 99370.74 12420.63 93373.20 00:14:22.844 00:14:22.844 real 0m8.054s 00:14:22.844 user 0m14.732s 00:14:22.844 sys 0m0.335s 00:14:22.844 13:48:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:22.844 ************************************ 00:14:22.844 END TEST bdev_verify 00:14:22.844 ************************************ 00:14:22.844 13:48:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:22.844 13:48:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:22.844 13:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:14:22.844 13:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:22.844 13:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:22.844 ************************************ 00:14:22.844 START TEST bdev_verify_big_io 00:14:22.844 ************************************ 00:14:22.844 13:48:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:22.844 [2024-11-04 13:48:09.519832] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:14:22.844 [2024-11-04 13:48:09.520025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64068 ] 00:14:22.844 [2024-11-04 13:48:09.715187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.101 [2024-11-04 13:48:09.842426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.101 [2024-11-04 13:48:09.842459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.036 Running I/O for 5 seconds... 
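A quick sanity check on the verify summary above: bdevperf's MiB/s column is just IOPS scaled by the I/O size, so 18214.40 IOPS * 4096 B = 74,606,182 B/s, and dividing by 2^20 gives 71.15 MiB/s, matching the last progress line exactly. The big-I/O pass now starting reuses the same harness with -o 65536, so its MiB/s figures relate to IOPS by a 64 KiB multiplier instead.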
00:14:27.315 816.00 IOPS, 51.00 MiB/s [2024-11-04T13:48:16.140Z] 1711.00 IOPS, 106.94 MiB/s [2024-11-04T13:48:16.745Z] 2099.00 IOPS, 131.19 MiB/s [2024-11-04T13:48:16.745Z] 2644.25 IOPS, 165.27 MiB/s 00:14:29.823 Latency(us) 00:14:29.823 [2024-11-04T13:48:16.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.823 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0xbd0b 00:14:29.823 Nvme0n1 : 5.73 133.95 8.37 0.00 0.00 923278.79 18474.91 862828.25 00:14:29.823 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:29.823 Nvme0n1 : 5.72 117.30 7.33 0.00 0.00 1037357.07 18474.91 1006632.96 00:14:29.823 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0x4ff8 00:14:29.823 Nvme1n1p1 : 5.66 135.72 8.48 0.00 0.00 903220.34 76895.57 882801.13 00:14:29.823 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x4ff8 length 0x4ff8 00:14:29.823 Nvme1n1p1 : 5.72 123.02 7.69 0.00 0.00 983423.36 79392.18 942719.76 00:14:29.823 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0x4ff7 00:14:29.823 Nvme1n1p2 : 5.74 133.87 8.37 0.00 0.00 884592.97 88379.98 902774.00 00:14:29.823 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x4ff7 length 0x4ff7 00:14:29.823 Nvme1n1p2 : 5.72 120.18 7.51 0.00 0.00 977129.06 90377.26 942719.76 00:14:29.823 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0x8000 00:14:29.823 Nvme2n1 : 5.66 135.61 8.48 0.00 0.00 864838.87 87381.33 918752.30 00:14:29.823 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x8000 length 0x8000 00:14:29.823 Nvme2n1 : 5.82 118.44 7.40 0.00 0.00 968172.31 58670.32 1709678.20 00:14:29.823 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0x8000 00:14:29.823 Nvme2n2 : 5.74 137.82 8.61 0.00 0.00 829336.76 73400.32 926741.46 00:14:29.823 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x8000 length 0x8000 00:14:29.823 Nvme2n2 : 5.82 123.03 7.69 0.00 0.00 911989.67 30833.13 1749623.95 00:14:29.823 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0x8000 00:14:29.823 Nvme2n3 : 5.79 150.87 9.43 0.00 0.00 749631.30 5242.88 922746.88 00:14:29.823 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x8000 length 0x8000 00:14:29.823 Nvme2n3 : 5.88 135.98 8.50 0.00 0.00 806283.33 15791.06 1525927.74 00:14:29.823 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x0 length 0x2000 00:14:29.823 Nvme3n1 : 5.79 154.77 9.67 0.00 0.00 715116.46 6459.98 930736.03 00:14:29.823 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:29.823 Verification LBA range: start 0x2000 length 0x2000 00:14:29.823 Nvme3n1 : 5.88 149.57 9.35 0.00 0.00 720553.14 
1575.98 1789569.71 00:14:29.823 [2024-11-04T13:48:16.745Z] =================================================================================================================== 00:14:29.823 [2024-11-04T13:48:16.745Z] Total : 1870.12 116.88 0.00 0.00 868031.98 1575.98 1789569.71 00:14:32.367 00:14:32.367 real 0m9.322s 00:14:32.367 user 0m17.214s 00:14:32.367 sys 0m0.364s 00:14:32.367 13:48:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:32.367 13:48:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 ************************************ 00:14:32.367 END TEST bdev_verify_big_io 00:14:32.367 ************************************ 00:14:32.367 13:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:32.367 13:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:32.367 13:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:32.367 13:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 ************************************ 00:14:32.367 START TEST bdev_write_zeroes 00:14:32.367 ************************************ 00:14:32.367 13:48:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:32.367 [2024-11-04 13:48:18.903820] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:14:32.367 [2024-11-04 13:48:18.903998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64183 ] 00:14:32.367 [2024-11-04 13:48:19.097992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.367 [2024-11-04 13:48:19.221749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.300 Running I/O for 1 seconds... 
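bdev_write_zeroes, starting above, swaps the workload to -w write_zeroes on a single core (-c 0x1 in the EAL parameters) for a one-second run: rather than transferring data buffers, bdevperf submits write-zeroes requests, exercising a distinct I/O path (on NVMe namespaces this maps to the Write Zeroes command, and the SPDK bdev layer emulates it with zero-filled writes for devices lacking native support). The IOPS-to-MiB/s relation from the verify run applies unchanged to the 4 KiB figures that follow.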
00:14:34.281 52864.00 IOPS, 206.50 MiB/s 00:14:34.281 Latency(us) 00:14:34.281 [2024-11-04T13:48:21.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.281 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme0n1 : 1.03 7537.27 29.44 0.00 0.00 16923.19 13232.03 31582.11 00:14:34.281 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme1n1p1 : 1.03 7527.82 29.41 0.00 0.00 16917.61 13294.45 32206.26 00:14:34.281 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme1n1p2 : 1.03 7518.63 29.37 0.00 0.00 16874.99 13169.62 30084.14 00:14:34.281 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme2n1 : 1.03 7511.21 29.34 0.00 0.00 16779.09 13668.94 24341.94 00:14:34.281 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme2n2 : 1.03 7557.03 29.52 0.00 0.00 16666.15 7957.94 23592.96 00:14:34.281 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme2n3 : 1.03 7548.88 29.49 0.00 0.00 16643.73 8051.57 24217.11 00:14:34.281 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:34.281 Nvme3n1 : 1.04 7541.63 29.46 0.00 0.00 16605.42 8301.23 25964.74 00:14:34.281 [2024-11-04T13:48:21.203Z] =================================================================================================================== 00:14:34.281 [2024-11-04T13:48:21.203Z] Total : 52742.46 206.03 0.00 0.00 16772.41 7957.94 32206.26 00:14:35.658 ************************************ 00:14:35.658 END TEST bdev_write_zeroes 00:14:35.658 00:14:35.658 real 0m3.410s 00:14:35.658 user 0m2.989s 00:14:35.658 sys 0m0.299s 00:14:35.658 13:48:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:35.658 13:48:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:35.658 ************************************ 00:14:35.658 13:48:22 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:35.658 13:48:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:35.658 13:48:22 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:35.658 13:48:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:35.658 ************************************ 00:14:35.658 START TEST bdev_json_nonenclosed 00:14:35.658 ************************************ 00:14:35.658 13:48:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:35.658 [2024-11-04 13:48:22.379728] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:14:35.658 [2024-11-04 13:48:22.379909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64236 ] 00:14:35.658 [2024-11-04 13:48:22.573457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.915 [2024-11-04 13:48:22.691522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.915 [2024-11-04 13:48:22.691666] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:35.915 [2024-11-04 13:48:22.691694] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:35.915 [2024-11-04 13:48:22.691707] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:36.172 ************************************ 00:14:36.172 END TEST bdev_json_nonenclosed 00:14:36.172 ************************************ 00:14:36.172 00:14:36.172 real 0m0.699s 00:14:36.172 user 0m0.437s 00:14:36.172 sys 0m0.156s 00:14:36.172 13:48:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.173 13:48:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:36.173 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:36.173 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:36.173 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.173 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:36.173 ************************************ 00:14:36.173 START TEST bdev_json_nonarray 00:14:36.173 ************************************ 00:14:36.173 13:48:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:36.430 [2024-11-04 13:48:23.141486] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:14:36.430 [2024-11-04 13:48:23.141685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64267 ] 00:14:36.430 [2024-11-04 13:48:23.334205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.689 [2024-11-04 13:48:23.452482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.689 [2024-11-04 13:48:23.452642] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
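The two bdev_json_* cases are negative tests of bdevperf's JSON config loading. The contents of nonenclosed.json and nonarray.json are not shown in this log, but judging by the errors they are presumably malformed along these lines:

    "subsystems": [ ... ]         # nonenclosed: top level not wrapped in { }
    { "subsystems": { ... } }     # nonarray: "subsystems" is an object, not an array

Each case passes when json_config rejects the file with the matching *ERROR* line and the app exits through spdk_app_stop, as the surrounding output shows.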
00:14:36.689 [2024-11-04 13:48:23.452675] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:36.689 [2024-11-04 13:48:23.452688] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:36.948 ************************************ 00:14:36.948 END TEST bdev_json_nonarray 00:14:36.948 ************************************ 00:14:36.948 00:14:36.948 real 0m0.697s 00:14:36.948 user 0m0.423s 00:14:36.948 sys 0m0.167s 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:36.948 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:14:36.948 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:14:36.948 13:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:14:36.948 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:36.948 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.948 13:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:36.948 ************************************ 00:14:36.948 START TEST bdev_gpt_uuid 00:14:36.948 ************************************ 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64298 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64298 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 64298 ']' 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:36.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:36.948 13:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:37.207 [2024-11-04 13:48:23.925781] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
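bdev_gpt_uuid, starting above, verifies that the GPT virtual bdevs expose their partition GUIDs: it loads bdev.json into a standalone spdk_tgt and looks each partition up by UUID over the RPC socket. The lookup boils down to this call (UUID and socket path taken from this log; rpc_cmd is the test suite's wrapper around rpc.py):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
    # Returns the Nvme1n1p1 GPT bdev as a JSON array; the test then asserts
    # with jq that .[0].aliases[0] and the driver_specific.gpt
    # unique_partition_guid both equal the requested UUID.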
00:14:37.207 [2024-11-04 13:48:23.925978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64298 ] 00:14:37.207 [2024-11-04 13:48:24.121204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.466 [2024-11-04 13:48:24.241884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.401 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:38.401 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:14:38.401 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:38.401 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.401 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:38.659 Some configs were skipped because the RPC state that can call them passed over. 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.659 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:14:38.659 { 00:14:38.659 "name": "Nvme1n1p1", 00:14:38.659 "aliases": [ 00:14:38.659 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:14:38.659 ], 00:14:38.659 "product_name": "GPT Disk", 00:14:38.659 "block_size": 4096, 00:14:38.659 "num_blocks": 655104, 00:14:38.659 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:38.659 "assigned_rate_limits": { 00:14:38.659 "rw_ios_per_sec": 0, 00:14:38.659 "rw_mbytes_per_sec": 0, 00:14:38.659 "r_mbytes_per_sec": 0, 00:14:38.659 "w_mbytes_per_sec": 0 00:14:38.659 }, 00:14:38.659 "claimed": false, 00:14:38.659 "zoned": false, 00:14:38.659 "supported_io_types": { 00:14:38.659 "read": true, 00:14:38.659 "write": true, 00:14:38.659 "unmap": true, 00:14:38.659 "flush": true, 00:14:38.659 "reset": true, 00:14:38.659 "nvme_admin": false, 00:14:38.659 "nvme_io": false, 00:14:38.659 "nvme_io_md": false, 00:14:38.659 "write_zeroes": true, 00:14:38.659 "zcopy": false, 00:14:38.659 "get_zone_info": false, 00:14:38.659 "zone_management": false, 00:14:38.660 "zone_append": false, 00:14:38.660 "compare": true, 00:14:38.660 "compare_and_write": false, 00:14:38.660 "abort": true, 00:14:38.660 "seek_hole": false, 00:14:38.660 "seek_data": false, 00:14:38.660 "copy": true, 00:14:38.660 "nvme_iov_md": false 00:14:38.660 }, 00:14:38.660 "driver_specific": { 
00:14:38.660 "gpt": { 00:14:38.660 "base_bdev": "Nvme1n1", 00:14:38.660 "offset_blocks": 256, 00:14:38.660 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:14:38.660 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:38.660 "partition_name": "SPDK_TEST_first" 00:14:38.660 } 00:14:38.660 } 00:14:38.660 } 00:14:38.660 ]' 00:14:38.660 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:14:38.660 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:14:38.660 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:14:38.660 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:14:38.934 { 00:14:38.934 "name": "Nvme1n1p2", 00:14:38.934 "aliases": [ 00:14:38.934 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:14:38.934 ], 00:14:38.934 "product_name": "GPT Disk", 00:14:38.934 "block_size": 4096, 00:14:38.934 "num_blocks": 655103, 00:14:38.934 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:38.934 "assigned_rate_limits": { 00:14:38.934 "rw_ios_per_sec": 0, 00:14:38.934 "rw_mbytes_per_sec": 0, 00:14:38.934 "r_mbytes_per_sec": 0, 00:14:38.934 "w_mbytes_per_sec": 0 00:14:38.934 }, 00:14:38.934 "claimed": false, 00:14:38.934 "zoned": false, 00:14:38.934 "supported_io_types": { 00:14:38.934 "read": true, 00:14:38.934 "write": true, 00:14:38.934 "unmap": true, 00:14:38.934 "flush": true, 00:14:38.934 "reset": true, 00:14:38.934 "nvme_admin": false, 00:14:38.934 "nvme_io": false, 00:14:38.934 "nvme_io_md": false, 00:14:38.934 "write_zeroes": true, 00:14:38.934 "zcopy": false, 00:14:38.934 "get_zone_info": false, 00:14:38.934 "zone_management": false, 00:14:38.934 "zone_append": false, 00:14:38.934 "compare": true, 00:14:38.934 "compare_and_write": false, 00:14:38.934 "abort": true, 00:14:38.934 "seek_hole": false, 00:14:38.934 "seek_data": false, 00:14:38.934 "copy": true, 00:14:38.934 "nvme_iov_md": false 00:14:38.934 }, 00:14:38.934 "driver_specific": { 00:14:38.934 "gpt": { 00:14:38.934 "base_bdev": "Nvme1n1", 00:14:38.934 "offset_blocks": 655360, 00:14:38.934 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:14:38.934 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:38.934 "partition_name": "SPDK_TEST_second" 00:14:38.934 } 00:14:38.934 } 00:14:38.934 } 00:14:38.934 ]' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64298 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 64298 ']' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 64298 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64298 00:14:38.934 killing process with pid 64298 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64298' 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 64298 00:14:38.934 13:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 64298 00:14:41.519 00:14:41.519 real 0m4.495s 00:14:41.519 user 0m4.643s 00:14:41.519 sys 0m0.570s 00:14:41.519 13:48:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:41.519 ************************************ 00:14:41.519 END TEST bdev_gpt_uuid 00:14:41.519 ************************************ 00:14:41.519 13:48:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:14:41.519 13:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:42.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:42.343 Waiting for block devices as requested 00:14:42.343 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.343 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:14:42.601 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.601 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:47.873 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:47.873 13:48:34 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:14:47.873 13:48:34 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:14:47.873 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:47.873 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:47.873 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:47.873 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:47.873 13:48:34 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:14:47.873 00:14:47.873 real 1m9.474s 00:14:47.873 user 1m27.158s 00:14:47.873 sys 0m12.921s 00:14:47.873 13:48:34 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:47.873 13:48:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:47.873 ************************************ 00:14:47.873 END TEST blockdev_nvme_gpt 00:14:47.873 ************************************ 00:14:48.132 13:48:34 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:48.132 13:48:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:48.132 13:48:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:48.132 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:14:48.132 ************************************ 00:14:48.132 START TEST nvme 00:14:48.132 ************************************ 00:14:48.132 13:48:34 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:48.132 * Looking for test storage... 00:14:48.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:48.132 13:48:34 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:48.132 13:48:34 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:48.132 13:48:34 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:48.132 13:48:34 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:48.132 13:48:34 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.132 13:48:34 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.132 13:48:34 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.132 13:48:34 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.132 13:48:34 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.132 13:48:34 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.132 13:48:35 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.132 13:48:35 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.132 13:48:35 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.132 13:48:35 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.132 13:48:35 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.132 13:48:35 nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:48.132 13:48:35 nvme -- scripts/common.sh@345 -- # : 1 00:14:48.132 13:48:35 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.132 13:48:35 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.132 13:48:35 nvme -- scripts/common.sh@365 -- # decimal 1 00:14:48.132 13:48:35 nvme -- scripts/common.sh@353 -- # local d=1 00:14:48.132 13:48:35 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.132 13:48:35 nvme -- scripts/common.sh@355 -- # echo 1 00:14:48.132 13:48:35 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.132 13:48:35 nvme -- scripts/common.sh@366 -- # decimal 2 00:14:48.132 13:48:35 nvme -- scripts/common.sh@353 -- # local d=2 00:14:48.132 13:48:35 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.132 13:48:35 nvme -- scripts/common.sh@355 -- # echo 2 00:14:48.132 13:48:35 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.132 13:48:35 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.132 13:48:35 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.132 13:48:35 nvme -- scripts/common.sh@368 -- # return 0 00:14:48.132 13:48:35 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.132 13:48:35 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.132 --rc genhtml_branch_coverage=1 00:14:48.132 --rc genhtml_function_coverage=1 00:14:48.132 --rc genhtml_legend=1 00:14:48.132 --rc geninfo_all_blocks=1 00:14:48.132 --rc geninfo_unexecuted_blocks=1 00:14:48.132 00:14:48.132 ' 00:14:48.132 13:48:35 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.132 --rc genhtml_branch_coverage=1 00:14:48.132 --rc genhtml_function_coverage=1 00:14:48.132 --rc genhtml_legend=1 00:14:48.132 --rc geninfo_all_blocks=1 00:14:48.132 --rc geninfo_unexecuted_blocks=1 00:14:48.132 00:14:48.132 ' 00:14:48.132 13:48:35 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.132 --rc genhtml_branch_coverage=1 00:14:48.132 --rc genhtml_function_coverage=1 00:14:48.132 --rc genhtml_legend=1 00:14:48.132 --rc geninfo_all_blocks=1 00:14:48.132 --rc geninfo_unexecuted_blocks=1 00:14:48.132 00:14:48.132 ' 00:14:48.132 13:48:35 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.132 --rc genhtml_branch_coverage=1 00:14:48.132 --rc genhtml_function_coverage=1 00:14:48.132 --rc genhtml_legend=1 00:14:48.132 --rc geninfo_all_blocks=1 00:14:48.132 --rc geninfo_unexecuted_blocks=1 00:14:48.132 00:14:48.132 ' 00:14:48.132 13:48:35 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:48.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.636 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.636 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.636 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.636 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.636 13:48:36 nvme -- nvme/nvme.sh@79 -- # uname 00:14:49.636 13:48:36 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:14:49.636 13:48:36 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:14:49.636 13:48:36 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:14:49.636 13:48:36 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1073 -- # stubpid=64956 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:14:49.636 Waiting for stub to ready for secondary processes... 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64956 ]] 00:14:49.636 13:48:36 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:14:49.904 [2024-11-04 13:48:36.587434] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:14:49.904 [2024-11-04 13:48:36.587653] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:14:50.840 13:48:37 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:50.840 13:48:37 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64956 ]] 00:14:50.840 13:48:37 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:14:50.840 [2024-11-04 13:48:37.667535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:51.099 [2024-11-04 13:48:37.838944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.099 [2024-11-04 13:48:37.838999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.099 [2024-11-04 13:48:37.839007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.099 [2024-11-04 13:48:37.865087] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:14:51.099 [2024-11-04 13:48:37.865160] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:51.099 [2024-11-04 13:48:37.873753] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:14:51.099 [2024-11-04 13:48:37.873871] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:14:51.099 [2024-11-04 13:48:37.876467] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:51.099 [2024-11-04 13:48:37.876766] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:14:51.099 [2024-11-04 13:48:37.876855] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:14:51.099 [2024-11-04 13:48:37.881067] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:51.099 [2024-11-04 13:48:37.881332] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:14:51.099 [2024-11-04 13:48:37.881419] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:14:51.099 [2024-11-04 13:48:37.885402] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:51.099 [2024-11-04 13:48:37.885702] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:14:51.099 [2024-11-04 13:48:37.885798] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:14:51.099 [2024-11-04 13:48:37.885864] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:14:51.099 [2024-11-04 13:48:37.885928] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:14:51.666 13:48:38 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:51.666 done. 00:14:51.666 13:48:38 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:14:51.666 13:48:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:51.666 13:48:38 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:14:51.666 13:48:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.666 13:48:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.666 ************************************ 00:14:51.666 START TEST nvme_reset 00:14:51.666 ************************************ 00:14:51.666 13:48:38 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:52.233 Initializing NVMe Controllers 00:14:52.233 Skipping QEMU NVMe SSD at 0000:00:10.0 00:14:52.233 Skipping QEMU NVMe SSD at 0000:00:11.0 00:14:52.233 Skipping QEMU NVMe SSD at 0000:00:13.0 00:14:52.233 Skipping QEMU NVMe SSD at 0000:00:12.0 00:14:52.233 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:14:52.233 00:14:52.233 real 0m0.370s 00:14:52.233 user 0m0.138s 00:14:52.233 sys 0m0.178s 00:14:52.233 13:48:38 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.233 13:48:38 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:14:52.233 ************************************ 00:14:52.233 END TEST nvme_reset 00:14:52.233 ************************************ 00:14:52.233 13:48:38 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:14:52.233 13:48:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:52.233 13:48:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.233 13:48:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.233 ************************************ 00:14:52.233 START TEST nvme_identify 00:14:52.234 ************************************ 00:14:52.234 13:48:38 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:14:52.234 13:48:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:14:52.234 13:48:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:14:52.234 13:48:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:14:52.234 13:48:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:14:52.234 13:48:38 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:14:52.234 13:48:38 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:14:52.234 13:48:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:52.234 13:48:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:52.234 13:48:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:14:52.234 13:48:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:14:52.234 13:48:39 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:52.234 13:48:39 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:14:52.496 [2024-11-04 13:48:39.329127] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64986 terminated unexpected 00:14:52.496 ===================================================== 00:14:52.496 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:52.496 ===================================================== 00:14:52.496 Controller Capabilities/Features 00:14:52.496 ================================ 00:14:52.496 Vendor ID: 1b36 00:14:52.496 Subsystem Vendor ID: 1af4 00:14:52.496 Serial Number: 12340 00:14:52.496 Model Number: QEMU NVMe Ctrl 00:14:52.496 Firmware Version: 8.0.0 00:14:52.496 Recommended Arb Burst: 6 00:14:52.496 IEEE OUI Identifier: 00 54 52 00:14:52.496 Multi-path I/O 00:14:52.496 May have multiple subsystem ports: No 00:14:52.496 May have multiple controllers: No 00:14:52.496 Associated with SR-IOV VF: No 00:14:52.496 Max Data Transfer Size: 524288 00:14:52.496 Max Number of Namespaces: 256 00:14:52.496 Max Number of I/O Queues: 64 00:14:52.496 NVMe Specification Version (VS): 1.4 00:14:52.496 NVMe Specification Version (Identify): 1.4 00:14:52.496 Maximum Queue Entries: 2048 00:14:52.496 Contiguous Queues Required: Yes 00:14:52.496 Arbitration Mechanisms Supported 00:14:52.496 Weighted Round Robin: Not Supported 00:14:52.496 Vendor Specific: Not Supported 00:14:52.496 Reset Timeout: 7500 ms 00:14:52.496 Doorbell Stride: 4 bytes 00:14:52.496 NVM Subsystem Reset: Not Supported 00:14:52.496 Command Sets Supported 00:14:52.496 NVM Command Set: Supported 00:14:52.496 Boot Partition: Not Supported 00:14:52.496 Memory Page Size Minimum: 4096 bytes 00:14:52.496 Memory Page Size Maximum: 65536 bytes 00:14:52.496 Persistent Memory Region: Not Supported 00:14:52.496 Optional Asynchronous Events Supported 00:14:52.496 Namespace Attribute Notices: Supported 00:14:52.496 Firmware Activation Notices: Not Supported 00:14:52.496 ANA Change Notices: Not Supported 00:14:52.496 PLE Aggregate Log Change Notices: Not Supported 00:14:52.496 LBA Status Info Alert Notices: Not Supported 00:14:52.496 EGE Aggregate Log Change Notices: Not Supported 00:14:52.496 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.496 Zone Descriptor Change Notices: Not Supported 00:14:52.496 Discovery Log Change Notices: Not Supported 00:14:52.496 Controller Attributes 00:14:52.496 128-bit Host Identifier: Not Supported 00:14:52.496 Non-Operational Permissive Mode: Not Supported 00:14:52.496 NVM Sets: Not Supported 00:14:52.496 Read Recovery Levels: Not Supported 00:14:52.496 Endurance Groups: Not Supported 00:14:52.496 Predictable Latency Mode: Not Supported 00:14:52.496 Traffic Based Keep ALive: Not Supported 00:14:52.496 Namespace Granularity: Not Supported 00:14:52.496 SQ Associations: Not Supported 00:14:52.496 UUID List: Not Supported 00:14:52.496 Multi-Domain Subsystem: Not Supported 00:14:52.496 Fixed Capacity Management: Not Supported 00:14:52.496 Variable Capacity Management: Not Supported 00:14:52.496 Delete Endurance Group: Not Supported 00:14:52.496 Delete NVM Set: Not Supported 00:14:52.496 Extended LBA Formats Supported: Supported 00:14:52.496 Flexible Data Placement Supported: Not Supported 00:14:52.496 00:14:52.496 Controller Memory Buffer Support 00:14:52.496 ================================ 00:14:52.496 Supported: No 
00:14:52.496 00:14:52.496 Persistent Memory Region Support 00:14:52.496 ================================ 00:14:52.496 Supported: No 00:14:52.496 00:14:52.496 Admin Command Set Attributes 00:14:52.496 ============================ 00:14:52.496 Security Send/Receive: Not Supported 00:14:52.496 Format NVM: Supported 00:14:52.496 Firmware Activate/Download: Not Supported 00:14:52.496 Namespace Management: Supported 00:14:52.496 Device Self-Test: Not Supported 00:14:52.496 Directives: Supported 00:14:52.496 NVMe-MI: Not Supported 00:14:52.496 Virtualization Management: Not Supported 00:14:52.496 Doorbell Buffer Config: Supported 00:14:52.496 Get LBA Status Capability: Not Supported 00:14:52.496 Command & Feature Lockdown Capability: Not Supported 00:14:52.496 Abort Command Limit: 4 00:14:52.496 Async Event Request Limit: 4 00:14:52.496 Number of Firmware Slots: N/A 00:14:52.496 Firmware Slot 1 Read-Only: N/A 00:14:52.496 Firmware Activation Without Reset: N/A 00:14:52.496 Multiple Update Detection Support: N/A 00:14:52.496 Firmware Update Granularity: No Information Provided 00:14:52.496 Per-Namespace SMART Log: Yes 00:14:52.496 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.496 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:52.496 Command Effects Log Page: Supported 00:14:52.496 Get Log Page Extended Data: Supported 00:14:52.496 Telemetry Log Pages: Not Supported 00:14:52.496 Persistent Event Log Pages: Not Supported 00:14:52.496 Supported Log Pages Log Page: May Support 00:14:52.496 Commands Supported & Effects Log Page: Not Supported 00:14:52.496 Feature Identifiers & Effects Log Page:May Support 00:14:52.496 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.496 Data Area 4 for Telemetry Log: Not Supported 00:14:52.496 Error Log Page Entries Supported: 1 00:14:52.496 Keep Alive: Not Supported 00:14:52.496 00:14:52.496 NVM Command Set Attributes 00:14:52.496 ========================== 00:14:52.496 Submission Queue Entry Size 00:14:52.496 Max: 64 00:14:52.496 Min: 64 00:14:52.496 Completion Queue Entry Size 00:14:52.496 Max: 16 00:14:52.496 Min: 16 00:14:52.496 Number of Namespaces: 256 00:14:52.496 Compare Command: Supported 00:14:52.496 Write Uncorrectable Command: Not Supported 00:14:52.496 Dataset Management Command: Supported 00:14:52.496 Write Zeroes Command: Supported 00:14:52.496 Set Features Save Field: Supported 00:14:52.496 Reservations: Not Supported 00:14:52.496 Timestamp: Supported 00:14:52.496 Copy: Supported 00:14:52.496 Volatile Write Cache: Present 00:14:52.496 Atomic Write Unit (Normal): 1 00:14:52.496 Atomic Write Unit (PFail): 1 00:14:52.496 Atomic Compare & Write Unit: 1 00:14:52.496 Fused Compare & Write: Not Supported 00:14:52.496 Scatter-Gather List 00:14:52.496 SGL Command Set: Supported 00:14:52.496 SGL Keyed: Not Supported 00:14:52.496 SGL Bit Bucket Descriptor: Not Supported 00:14:52.496 SGL Metadata Pointer: Not Supported 00:14:52.496 Oversized SGL: Not Supported 00:14:52.496 SGL Metadata Address: Not Supported 00:14:52.496 SGL Offset: Not Supported 00:14:52.496 Transport SGL Data Block: Not Supported 00:14:52.496 Replay Protected Memory Block: Not Supported 00:14:52.496 00:14:52.496 Firmware Slot Information 00:14:52.496 ========================= 00:14:52.496 Active slot: 1 00:14:52.496 Slot 1 Firmware Revision: 1.0 00:14:52.496 00:14:52.496 00:14:52.496 Commands Supported and Effects 00:14:52.496 ============================== 00:14:52.496 Admin Commands 00:14:52.496 -------------- 00:14:52.496 Delete I/O Submission Queue (00h): Supported 
00:14:52.496 Create I/O Submission Queue (01h): Supported 00:14:52.496 Get Log Page (02h): Supported 00:14:52.496 Delete I/O Completion Queue (04h): Supported 00:14:52.496 Create I/O Completion Queue (05h): Supported 00:14:52.496 Identify (06h): Supported 00:14:52.496 Abort (08h): Supported 00:14:52.496 Set Features (09h): Supported 00:14:52.496 Get Features (0Ah): Supported 00:14:52.497 Asynchronous Event Request (0Ch): Supported 00:14:52.497 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:52.497 Directive Send (19h): Supported 00:14:52.497 Directive Receive (1Ah): Supported 00:14:52.497 Virtualization Management (1Ch): Supported 00:14:52.497 Doorbell Buffer Config (7Ch): Supported 00:14:52.497 Format NVM (80h): Supported LBA-Change 00:14:52.497 I/O Commands 00:14:52.497 ------------ 00:14:52.497 Flush (00h): Supported LBA-Change 00:14:52.497 Write (01h): Supported LBA-Change 00:14:52.497 Read (02h): Supported 00:14:52.497 Compare (05h): Supported 00:14:52.497 Write Zeroes (08h): Supported LBA-Change 00:14:52.497 Dataset Management (09h): Supported LBA-Change 00:14:52.497 Unknown (0Ch): Supported 00:14:52.497 Unknown (12h): Supported 00:14:52.497 Copy (19h): Supported LBA-Change 00:14:52.497 Unknown (1Dh): Supported LBA-Change 00:14:52.497 00:14:52.497 Error Log 00:14:52.497 ========= 00:14:52.497 00:14:52.497 Arbitration 00:14:52.497 =========== 00:14:52.497 Arbitration Burst: no limit 00:14:52.497 00:14:52.497 Power Management 00:14:52.497 ================ 00:14:52.497 Number of Power States: 1 00:14:52.497 Current Power State: Power State #0 00:14:52.497 Power State #0: 00:14:52.497 Max Power: 25.00 W 00:14:52.497 Non-Operational State: Operational 00:14:52.497 Entry Latency: 16 microseconds 00:14:52.497 Exit Latency: 4 microseconds 00:14:52.497 Relative Read Throughput: 0 00:14:52.497 Relative Read Latency: 0 00:14:52.497 Relative Write Throughput: 0 00:14:52.497 Relative Write Latency: 0 00:14:52.497 [2024-11-04 13:48:39.330557] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64986 terminated unexpected 00:14:52.497 Idle Power: Not Reported 00:14:52.497 Active Power: Not Reported 00:14:52.497 Non-Operational Permissive Mode: Not Supported 00:14:52.497 00:14:52.497 Health Information 00:14:52.497 ================== 00:14:52.497 Critical Warnings: 00:14:52.497 Available Spare Space: OK 00:14:52.497 Temperature: OK 00:14:52.497 Device Reliability: OK 00:14:52.497 Read Only: No 00:14:52.497 Volatile Memory Backup: OK 00:14:52.497 Current Temperature: 323 Kelvin (50 Celsius) 00:14:52.497 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:52.497 Available Spare: 0% 00:14:52.497 Available Spare Threshold: 0% 00:14:52.497 Life Percentage Used: 0% 00:14:52.497 Data Units Read: 671 00:14:52.497 Data Units Written: 599 00:14:52.497 Host Read Commands: 31603 00:14:52.497 Host Write Commands: 31389 00:14:52.497 Controller Busy Time: 0 minutes 00:14:52.497 Power Cycles: 0 00:14:52.497 Power On Hours: 0 hours 00:14:52.497 Unsafe Shutdowns: 0 00:14:52.497 Unrecoverable Media Errors: 0 00:14:52.497 Lifetime Error Log Entries: 0 00:14:52.497 Warning Temperature Time: 0 minutes 00:14:52.497 Critical Temperature Time: 0 minutes 00:14:52.497 00:14:52.497 Number of Queues 00:14:52.497 ================ 00:14:52.497 Number of I/O Submission Queues: 64 00:14:52.497 Number of I/O Completion Queues: 64 00:14:52.497 00:14:52.497 ZNS Specific Controller Data 00:14:52.497 ============================ 00:14:52.497 Zone Append Size Limit: 0 00:14:52.497 
00:14:52.497 00:14:52.497 Active Namespaces 00:14:52.497 ================= 00:14:52.497 Namespace ID:1 00:14:52.497 Error Recovery Timeout: Unlimited 00:14:52.497 Command Set Identifier: NVM (00h) 00:14:52.497 Deallocate: Supported 00:14:52.497 Deallocated/Unwritten Error: Supported 00:14:52.497 Deallocated Read Value: All 0x00 00:14:52.497 Deallocate in Write Zeroes: Not Supported 00:14:52.497 Deallocated Guard Field: 0xFFFF 00:14:52.497 Flush: Supported 00:14:52.497 Reservation: Not Supported 00:14:52.497 Metadata Transferred as: Separate Metadata Buffer 00:14:52.497 Namespace Sharing Capabilities: Private 00:14:52.497 Size (in LBAs): 1548666 (5GiB) 00:14:52.497 Capacity (in LBAs): 1548666 (5GiB) 00:14:52.497 Utilization (in LBAs): 1548666 (5GiB) 00:14:52.497 Thin Provisioning: Not Supported 00:14:52.497 Per-NS Atomic Units: No 00:14:52.497 Maximum Single Source Range Length: 128 00:14:52.497 Maximum Copy Length: 128 00:14:52.497 Maximum Source Range Count: 128 00:14:52.497 NGUID/EUI64 Never Reused: No 00:14:52.497 Namespace Write Protected: No 00:14:52.497 Number of LBA Formats: 8 00:14:52.497 Current LBA Format: LBA Format #07 00:14:52.497 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.497 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:52.497 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:52.497 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:52.497 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:52.497 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:52.497 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:52.497 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:52.497 00:14:52.497 NVM Specific Namespace Data 00:14:52.497 =========================== 00:14:52.497 Logical Block Storage Tag Mask: 0 00:14:52.497 Protection Information Capabilities: 00:14:52.497 16b Guard Protection Information Storage Tag Support: No 00:14:52.497 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:52.497 Storage Tag Check Read Support: No 00:14:52.497 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.497 ===================================================== 00:14:52.497 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:52.497 ===================================================== 00:14:52.497 Controller Capabilities/Features 00:14:52.497 ================================ 00:14:52.497 Vendor ID: 1b36 00:14:52.497 Subsystem Vendor ID: 1af4 00:14:52.497 Serial Number: 12341 00:14:52.497 Model Number: QEMU NVMe Ctrl 00:14:52.497 Firmware Version: 8.0.0 00:14:52.497 Recommended Arb Burst: 6 00:14:52.497 IEEE OUI Identifier: 00 54 52 00:14:52.497 Multi-path I/O 00:14:52.497 May have multiple subsystem ports: No 00:14:52.497 May have multiple controllers: No 
00:14:52.497 Associated with SR-IOV VF: No 00:14:52.497 Max Data Transfer Size: 524288 00:14:52.497 Max Number of Namespaces: 256 00:14:52.497 Max Number of I/O Queues: 64 00:14:52.497 NVMe Specification Version (VS): 1.4 00:14:52.497 NVMe Specification Version (Identify): 1.4 00:14:52.497 Maximum Queue Entries: 2048 00:14:52.497 Contiguous Queues Required: Yes 00:14:52.497 Arbitration Mechanisms Supported 00:14:52.497 Weighted Round Robin: Not Supported 00:14:52.497 Vendor Specific: Not Supported 00:14:52.497 Reset Timeout: 7500 ms 00:14:52.497 Doorbell Stride: 4 bytes 00:14:52.497 NVM Subsystem Reset: Not Supported 00:14:52.497 Command Sets Supported 00:14:52.497 NVM Command Set: Supported 00:14:52.497 Boot Partition: Not Supported 00:14:52.497 Memory Page Size Minimum: 4096 bytes 00:14:52.497 Memory Page Size Maximum: 65536 bytes 00:14:52.497 Persistent Memory Region: Not Supported 00:14:52.497 Optional Asynchronous Events Supported 00:14:52.497 Namespace Attribute Notices: Supported 00:14:52.497 Firmware Activation Notices: Not Supported 00:14:52.497 ANA Change Notices: Not Supported 00:14:52.497 PLE Aggregate Log Change Notices: Not Supported 00:14:52.497 LBA Status Info Alert Notices: Not Supported 00:14:52.497 EGE Aggregate Log Change Notices: Not Supported 00:14:52.497 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.497 Zone Descriptor Change Notices: Not Supported 00:14:52.497 Discovery Log Change Notices: Not Supported 00:14:52.497 Controller Attributes 00:14:52.497 128-bit Host Identifier: Not Supported 00:14:52.497 Non-Operational Permissive Mode: Not Supported 00:14:52.497 NVM Sets: Not Supported 00:14:52.497 Read Recovery Levels: Not Supported 00:14:52.497 Endurance Groups: Not Supported 00:14:52.497 Predictable Latency Mode: Not Supported 00:14:52.497 Traffic Based Keep ALive: Not Supported 00:14:52.497 Namespace Granularity: Not Supported 00:14:52.497 SQ Associations: Not Supported 00:14:52.497 UUID List: Not Supported 00:14:52.497 Multi-Domain Subsystem: Not Supported 00:14:52.497 Fixed Capacity Management: Not Supported 00:14:52.497 Variable Capacity Management: Not Supported 00:14:52.497 Delete Endurance Group: Not Supported 00:14:52.497 Delete NVM Set: Not Supported 00:14:52.497 Extended LBA Formats Supported: Supported 00:14:52.497 Flexible Data Placement Supported: Not Supported 00:14:52.497 00:14:52.497 Controller Memory Buffer Support 00:14:52.497 ================================ 00:14:52.497 Supported: No 00:14:52.497 00:14:52.497 Persistent Memory Region Support 00:14:52.497 ================================ 00:14:52.498 Supported: No 00:14:52.498 00:14:52.498 Admin Command Set Attributes 00:14:52.498 ============================ 00:14:52.498 Security Send/Receive: Not Supported 00:14:52.498 Format NVM: Supported 00:14:52.498 Firmware Activate/Download: Not Supported 00:14:52.498 Namespace Management: Supported 00:14:52.498 Device Self-Test: Not Supported 00:14:52.498 Directives: Supported 00:14:52.498 NVMe-MI: Not Supported 00:14:52.498 Virtualization Management: Not Supported 00:14:52.498 Doorbell Buffer Config: Supported 00:14:52.498 Get LBA Status Capability: Not Supported 00:14:52.498 Command & Feature Lockdown Capability: Not Supported 00:14:52.498 Abort Command Limit: 4 00:14:52.498 Async Event Request Limit: 4 00:14:52.498 Number of Firmware Slots: N/A 00:14:52.498 Firmware Slot 1 Read-Only: N/A 00:14:52.498 Firmware Activation Without Reset: N/A 00:14:52.498 Multiple Update Detection Support: N/A 00:14:52.498 Firmware Update Granularity: No 
Information Provided 00:14:52.498 Per-Namespace SMART Log: Yes 00:14:52.498 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.498 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:52.498 Command Effects Log Page: Supported 00:14:52.498 Get Log Page Extended Data: Supported 00:14:52.498 Telemetry Log Pages: Not Supported 00:14:52.498 Persistent Event Log Pages: Not Supported 00:14:52.498 Supported Log Pages Log Page: May Support 00:14:52.498 Commands Supported & Effects Log Page: Not Supported 00:14:52.498 Feature Identifiers & Effects Log Page:May Support 00:14:52.498 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.498 Data Area 4 for Telemetry Log: Not Supported 00:14:52.498 Error Log Page Entries Supported: 1 00:14:52.498 Keep Alive: Not Supported 00:14:52.498 00:14:52.498 NVM Command Set Attributes 00:14:52.498 ========================== 00:14:52.498 Submission Queue Entry Size 00:14:52.498 Max: 64 00:14:52.498 Min: 64 00:14:52.498 Completion Queue Entry Size 00:14:52.498 Max: 16 00:14:52.498 Min: 16 00:14:52.498 Number of Namespaces: 256 00:14:52.498 Compare Command: Supported 00:14:52.498 Write Uncorrectable Command: Not Supported 00:14:52.498 Dataset Management Command: Supported 00:14:52.498 Write Zeroes Command: Supported 00:14:52.498 Set Features Save Field: Supported 00:14:52.498 Reservations: Not Supported 00:14:52.498 Timestamp: Supported 00:14:52.498 Copy: Supported 00:14:52.498 Volatile Write Cache: Present 00:14:52.498 Atomic Write Unit (Normal): 1 00:14:52.498 Atomic Write Unit (PFail): 1 00:14:52.498 Atomic Compare & Write Unit: 1 00:14:52.498 Fused Compare & Write: Not Supported 00:14:52.498 Scatter-Gather List 00:14:52.498 SGL Command Set: Supported 00:14:52.498 SGL Keyed: Not Supported 00:14:52.498 SGL Bit Bucket Descriptor: Not Supported 00:14:52.498 SGL Metadata Pointer: Not Supported 00:14:52.498 Oversized SGL: Not Supported 00:14:52.498 SGL Metadata Address: Not Supported 00:14:52.498 SGL Offset: Not Supported 00:14:52.498 Transport SGL Data Block: Not Supported 00:14:52.498 Replay Protected Memory Block: Not Supported 00:14:52.498 00:14:52.498 Firmware Slot Information 00:14:52.498 ========================= 00:14:52.498 Active slot: 1 00:14:52.498 Slot 1 Firmware Revision: 1.0 00:14:52.498 00:14:52.498 00:14:52.498 Commands Supported and Effects 00:14:52.498 ============================== 00:14:52.498 Admin Commands 00:14:52.498 -------------- 00:14:52.498 Delete I/O Submission Queue (00h): Supported 00:14:52.498 Create I/O Submission Queue (01h): Supported 00:14:52.498 Get Log Page (02h): Supported 00:14:52.498 Delete I/O Completion Queue (04h): Supported 00:14:52.498 Create I/O Completion Queue (05h): Supported 00:14:52.498 Identify (06h): Supported 00:14:52.498 Abort (08h): Supported 00:14:52.498 Set Features (09h): Supported 00:14:52.498 Get Features (0Ah): Supported 00:14:52.498 Asynchronous Event Request (0Ch): Supported 00:14:52.498 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:52.498 Directive Send (19h): Supported 00:14:52.498 Directive Receive (1Ah): Supported 00:14:52.498 Virtualization Management (1Ch): Supported 00:14:52.498 Doorbell Buffer Config (7Ch): Supported 00:14:52.498 Format NVM (80h): Supported LBA-Change 00:14:52.498 I/O Commands 00:14:52.498 ------------ 00:14:52.498 Flush (00h): Supported LBA-Change 00:14:52.498 Write (01h): Supported LBA-Change 00:14:52.498 Read (02h): Supported 00:14:52.498 Compare (05h): Supported 00:14:52.498 Write Zeroes (08h): Supported LBA-Change 00:14:52.498 Dataset Management 
(09h): Supported LBA-Change 00:14:52.498 Unknown (0Ch): Supported 00:14:52.498 Unknown (12h): Supported 00:14:52.498 Copy (19h): Supported LBA-Change 00:14:52.498 Unknown (1Dh): Supported LBA-Change 00:14:52.498 00:14:52.498 Error Log 00:14:52.498 ========= 00:14:52.498 00:14:52.498 Arbitration 00:14:52.498 =========== 00:14:52.498 Arbitration Burst: no limit 00:14:52.498 00:14:52.498 Power Management 00:14:52.498 ================ 00:14:52.498 Number of Power States: 1 00:14:52.498 Current Power State: Power State #0 00:14:52.498 Power State #0: 00:14:52.498 Max Power: 25.00 W 00:14:52.498 Non-Operational State: Operational 00:14:52.498 Entry Latency: 16 microseconds 00:14:52.498 Exit Latency: 4 microseconds 00:14:52.498 Relative Read Throughput: 0 00:14:52.498 Relative Read Latency: 0 00:14:52.498 Relative Write Throughput: 0 00:14:52.498 Relative Write Latency: 0 00:14:52.498 Idle Power: Not Reported 00:14:52.498 Active Power: Not Reported 00:14:52.498 Non-Operational Permissive Mode: Not Supported 00:14:52.498 00:14:52.498 Health Information 00:14:52.498 ================== 00:14:52.498 Critical Warnings: 00:14:52.498 Available Spare Space: OK 00:14:52.498 [2024-11-04 13:48:39.331647] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64986 terminated unexpected 00:14:52.498 Temperature: OK 00:14:52.498 Device Reliability: OK 00:14:52.498 Read Only: No 00:14:52.498 Volatile Memory Backup: OK 00:14:52.498 Current Temperature: 323 Kelvin (50 Celsius) 00:14:52.498 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:52.498 Available Spare: 0% 00:14:52.498 Available Spare Threshold: 0% 00:14:52.498 Life Percentage Used: 0% 00:14:52.498 Data Units Read: 1039 00:14:52.498 Data Units Written: 900 00:14:52.498 Host Read Commands: 47296 00:14:52.498 Host Write Commands: 45978 00:14:52.498 Controller Busy Time: 0 minutes 00:14:52.498 Power Cycles: 0 00:14:52.498 Power On Hours: 0 hours 00:14:52.498 Unsafe Shutdowns: 0 00:14:52.498 Unrecoverable Media Errors: 0 00:14:52.498 Lifetime Error Log Entries: 0 00:14:52.498 Warning Temperature Time: 0 minutes 00:14:52.498 Critical Temperature Time: 0 minutes 00:14:52.498 00:14:52.498 Number of Queues 00:14:52.498 ================ 00:14:52.498 Number of I/O Submission Queues: 64 00:14:52.498 Number of I/O Completion Queues: 64 00:14:52.498 00:14:52.498 ZNS Specific Controller Data 00:14:52.498 ============================ 00:14:52.498 Zone Append Size Limit: 0 00:14:52.498 00:14:52.498 00:14:52.498 Active Namespaces 00:14:52.498 ================= 00:14:52.498 Namespace ID:1 00:14:52.498 Error Recovery Timeout: Unlimited 00:14:52.498 Command Set Identifier: NVM (00h) 00:14:52.498 Deallocate: Supported 00:14:52.498 Deallocated/Unwritten Error: Supported 00:14:52.498 Deallocated Read Value: All 0x00 00:14:52.498 Deallocate in Write Zeroes: Not Supported 00:14:52.498 Deallocated Guard Field: 0xFFFF 00:14:52.498 Flush: Supported 00:14:52.498 Reservation: Not Supported 00:14:52.498 Namespace Sharing Capabilities: Private 00:14:52.498 Size (in LBAs): 1310720 (5GiB) 00:14:52.498 Capacity (in LBAs): 1310720 (5GiB) 00:14:52.498 Utilization (in LBAs): 1310720 (5GiB) 00:14:52.498 Thin Provisioning: Not Supported 00:14:52.498 Per-NS Atomic Units: No 00:14:52.498 Maximum Single Source Range Length: 128 00:14:52.498 Maximum Copy Length: 128 00:14:52.498 Maximum Source Range Count: 128 00:14:52.498 NGUID/EUI64 Never Reused: No 00:14:52.499 Namespace Write Protected: No 00:14:52.499 Number of LBA Formats: 8 00:14:52.499 Current LBA Format: 
LBA Format #04 00:14:52.499 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.499 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:52.499 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:52.499 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:52.499 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:52.499 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:52.499 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:52.499 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:52.499 00:14:52.499 NVM Specific Namespace Data 00:14:52.499 =========================== 00:14:52.499 Logical Block Storage Tag Mask: 0 00:14:52.499 Protection Information Capabilities: 00:14:52.499 16b Guard Protection Information Storage Tag Support: No 00:14:52.499 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:52.499 Storage Tag Check Read Support: No 00:14:52.499 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.499 ===================================================== 00:14:52.499 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:52.499 ===================================================== 00:14:52.499 Controller Capabilities/Features 00:14:52.499 ================================ 00:14:52.499 Vendor ID: 1b36 00:14:52.499 Subsystem Vendor ID: 1af4 00:14:52.499 Serial Number: 12343 00:14:52.499 Model Number: QEMU NVMe Ctrl 00:14:52.499 Firmware Version: 8.0.0 00:14:52.499 Recommended Arb Burst: 6 00:14:52.499 IEEE OUI Identifier: 00 54 52 00:14:52.499 Multi-path I/O 00:14:52.499 May have multiple subsystem ports: No 00:14:52.499 May have multiple controllers: Yes 00:14:52.499 Associated with SR-IOV VF: No 00:14:52.499 Max Data Transfer Size: 524288 00:14:52.499 Max Number of Namespaces: 256 00:14:52.499 Max Number of I/O Queues: 64 00:14:52.499 NVMe Specification Version (VS): 1.4 00:14:52.499 NVMe Specification Version (Identify): 1.4 00:14:52.499 Maximum Queue Entries: 2048 00:14:52.499 Contiguous Queues Required: Yes 00:14:52.499 Arbitration Mechanisms Supported 00:14:52.499 Weighted Round Robin: Not Supported 00:14:52.499 Vendor Specific: Not Supported 00:14:52.499 Reset Timeout: 7500 ms 00:14:52.499 Doorbell Stride: 4 bytes 00:14:52.499 NVM Subsystem Reset: Not Supported 00:14:52.499 Command Sets Supported 00:14:52.499 NVM Command Set: Supported 00:14:52.499 Boot Partition: Not Supported 00:14:52.499 Memory Page Size Minimum: 4096 bytes 00:14:52.499 Memory Page Size Maximum: 65536 bytes 00:14:52.499 Persistent Memory Region: Not Supported 00:14:52.499 Optional Asynchronous Events Supported 00:14:52.499 Namespace Attribute Notices: Supported 00:14:52.499 Firmware Activation Notices: Not Supported 00:14:52.499 ANA Change Notices: Not Supported 00:14:52.499 PLE Aggregate Log 
Change Notices: Not Supported 00:14:52.499 LBA Status Info Alert Notices: Not Supported 00:14:52.499 EGE Aggregate Log Change Notices: Not Supported 00:14:52.499 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.499 Zone Descriptor Change Notices: Not Supported 00:14:52.499 Discovery Log Change Notices: Not Supported 00:14:52.499 Controller Attributes 00:14:52.499 128-bit Host Identifier: Not Supported 00:14:52.499 Non-Operational Permissive Mode: Not Supported 00:14:52.499 NVM Sets: Not Supported 00:14:52.499 Read Recovery Levels: Not Supported 00:14:52.499 Endurance Groups: Supported 00:14:52.499 Predictable Latency Mode: Not Supported 00:14:52.499 Traffic Based Keep ALive: Not Supported 00:14:52.499 Namespace Granularity: Not Supported 00:14:52.499 SQ Associations: Not Supported 00:14:52.499 UUID List: Not Supported 00:14:52.499 Multi-Domain Subsystem: Not Supported 00:14:52.499 Fixed Capacity Management: Not Supported 00:14:52.499 Variable Capacity Management: Not Supported 00:14:52.499 Delete Endurance Group: Not Supported 00:14:52.499 Delete NVM Set: Not Supported 00:14:52.499 Extended LBA Formats Supported: Supported 00:14:52.499 Flexible Data Placement Supported: Supported 00:14:52.499 00:14:52.499 Controller Memory Buffer Support 00:14:52.499 ================================ 00:14:52.499 Supported: No 00:14:52.499 00:14:52.499 Persistent Memory Region Support 00:14:52.499 ================================ 00:14:52.499 Supported: No 00:14:52.499 00:14:52.499 Admin Command Set Attributes 00:14:52.499 ============================ 00:14:52.499 Security Send/Receive: Not Supported 00:14:52.499 Format NVM: Supported 00:14:52.499 Firmware Activate/Download: Not Supported 00:14:52.499 Namespace Management: Supported 00:14:52.499 Device Self-Test: Not Supported 00:14:52.499 Directives: Supported 00:14:52.499 NVMe-MI: Not Supported 00:14:52.499 Virtualization Management: Not Supported 00:14:52.499 Doorbell Buffer Config: Supported 00:14:52.499 Get LBA Status Capability: Not Supported 00:14:52.499 Command & Feature Lockdown Capability: Not Supported 00:14:52.499 Abort Command Limit: 4 00:14:52.499 Async Event Request Limit: 4 00:14:52.499 Number of Firmware Slots: N/A 00:14:52.499 Firmware Slot 1 Read-Only: N/A 00:14:52.499 Firmware Activation Without Reset: N/A 00:14:52.499 Multiple Update Detection Support: N/A 00:14:52.499 Firmware Update Granularity: No Information Provided 00:14:52.499 Per-Namespace SMART Log: Yes 00:14:52.499 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.499 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:52.499 Command Effects Log Page: Supported 00:14:52.499 Get Log Page Extended Data: Supported 00:14:52.499 Telemetry Log Pages: Not Supported 00:14:52.499 Persistent Event Log Pages: Not Supported 00:14:52.499 Supported Log Pages Log Page: May Support 00:14:52.499 Commands Supported & Effects Log Page: Not Supported 00:14:52.499 Feature Identifiers & Effects Log Page:May Support 00:14:52.499 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.499 Data Area 4 for Telemetry Log: Not Supported 00:14:52.499 Error Log Page Entries Supported: 1 00:14:52.499 Keep Alive: Not Supported 00:14:52.499 00:14:52.499 NVM Command Set Attributes 00:14:52.499 ========================== 00:14:52.499 Submission Queue Entry Size 00:14:52.499 Max: 64 00:14:52.499 Min: 64 00:14:52.499 Completion Queue Entry Size 00:14:52.499 Max: 16 00:14:52.499 Min: 16 00:14:52.499 Number of Namespaces: 256 00:14:52.499 Compare Command: Supported 00:14:52.499 Write 
Uncorrectable Command: Not Supported 00:14:52.499 Dataset Management Command: Supported 00:14:52.499 Write Zeroes Command: Supported 00:14:52.499 Set Features Save Field: Supported 00:14:52.499 Reservations: Not Supported 00:14:52.499 Timestamp: Supported 00:14:52.499 Copy: Supported 00:14:52.499 Volatile Write Cache: Present 00:14:52.499 Atomic Write Unit (Normal): 1 00:14:52.499 Atomic Write Unit (PFail): 1 00:14:52.499 Atomic Compare & Write Unit: 1 00:14:52.499 Fused Compare & Write: Not Supported 00:14:52.499 Scatter-Gather List 00:14:52.499 SGL Command Set: Supported 00:14:52.499 SGL Keyed: Not Supported 00:14:52.499 SGL Bit Bucket Descriptor: Not Supported 00:14:52.499 SGL Metadata Pointer: Not Supported 00:14:52.499 Oversized SGL: Not Supported 00:14:52.499 SGL Metadata Address: Not Supported 00:14:52.499 SGL Offset: Not Supported 00:14:52.499 Transport SGL Data Block: Not Supported 00:14:52.499 Replay Protected Memory Block: Not Supported 00:14:52.499 00:14:52.499 Firmware Slot Information 00:14:52.499 ========================= 00:14:52.499 Active slot: 1 00:14:52.499 Slot 1 Firmware Revision: 1.0 00:14:52.499 00:14:52.499 00:14:52.499 Commands Supported and Effects 00:14:52.499 ============================== 00:14:52.499 Admin Commands 00:14:52.499 -------------- 00:14:52.499 Delete I/O Submission Queue (00h): Supported 00:14:52.499 Create I/O Submission Queue (01h): Supported 00:14:52.499 Get Log Page (02h): Supported 00:14:52.499 Delete I/O Completion Queue (04h): Supported 00:14:52.499 Create I/O Completion Queue (05h): Supported 00:14:52.499 Identify (06h): Supported 00:14:52.499 Abort (08h): Supported 00:14:52.499 Set Features (09h): Supported 00:14:52.499 Get Features (0Ah): Supported 00:14:52.499 Asynchronous Event Request (0Ch): Supported 00:14:52.499 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:52.499 Directive Send (19h): Supported 00:14:52.499 Directive Receive (1Ah): Supported 00:14:52.499 Virtualization Management (1Ch): Supported 00:14:52.499 Doorbell Buffer Config (7Ch): Supported 00:14:52.500 Format NVM (80h): Supported LBA-Change 00:14:52.500 I/O Commands 00:14:52.500 ------------ 00:14:52.500 Flush (00h): Supported LBA-Change 00:14:52.500 Write (01h): Supported LBA-Change 00:14:52.500 Read (02h): Supported 00:14:52.500 Compare (05h): Supported 00:14:52.500 Write Zeroes (08h): Supported LBA-Change 00:14:52.500 Dataset Management (09h): Supported LBA-Change 00:14:52.500 Unknown (0Ch): Supported 00:14:52.500 Unknown (12h): Supported 00:14:52.500 Copy (19h): Supported LBA-Change 00:14:52.500 Unknown (1Dh): Supported LBA-Change 00:14:52.500 00:14:52.500 Error Log 00:14:52.500 ========= 00:14:52.500 00:14:52.500 Arbitration 00:14:52.500 =========== 00:14:52.500 Arbitration Burst: no limit 00:14:52.500 00:14:52.500 Power Management 00:14:52.500 ================ 00:14:52.500 Number of Power States: 1 00:14:52.500 Current Power State: Power State #0 00:14:52.500 Power State #0: 00:14:52.500 Max Power: 25.00 W 00:14:52.500 Non-Operational State: Operational 00:14:52.500 Entry Latency: 16 microseconds 00:14:52.500 Exit Latency: 4 microseconds 00:14:52.500 Relative Read Throughput: 0 00:14:52.500 Relative Read Latency: 0 00:14:52.500 Relative Write Throughput: 0 00:14:52.500 Relative Write Latency: 0 00:14:52.500 Idle Power: Not Reported 00:14:52.500 Active Power: Not Reported 00:14:52.500 Non-Operational Permissive Mode: Not Supported 00:14:52.500 00:14:52.500 Health Information 00:14:52.500 ================== 00:14:52.500 Critical Warnings: 00:14:52.500 
Available Spare Space: OK 00:14:52.500 Temperature: OK 00:14:52.500 Device Reliability: OK 00:14:52.500 Read Only: No 00:14:52.500 Volatile Memory Backup: OK 00:14:52.500 Current Temperature: 323 Kelvin (50 Celsius) 00:14:52.500 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:52.500 Available Spare: 0% 00:14:52.500 Available Spare Threshold: 0% 00:14:52.500 Life Percentage Used: 0% 00:14:52.500 Data Units Read: 762 00:14:52.500 Data Units Written: 691 00:14:52.500 Host Read Commands: 32552 00:14:52.500 Host Write Commands: 31975 00:14:52.500 Controller Busy Time: 0 minutes 00:14:52.500 Power Cycles: 0 00:14:52.500 Power On Hours: 0 hours 00:14:52.500 Unsafe Shutdowns: 0 00:14:52.500 Unrecoverable Media Errors: 0 00:14:52.500 Lifetime Error Log Entries: 0 00:14:52.500 Warning Temperature Time: 0 minutes 00:14:52.500 Critical Temperature Time: 0 minutes 00:14:52.500 00:14:52.500 Number of Queues 00:14:52.500 ================ 00:14:52.500 Number of I/O Submission Queues: 64 00:14:52.500 Number of I/O Completion Queues: 64 00:14:52.500 00:14:52.500 ZNS Specific Controller Data 00:14:52.500 ============================ 00:14:52.500 Zone Append Size Limit: 0 00:14:52.500 00:14:52.500 00:14:52.500 Active Namespaces 00:14:52.500 ================= 00:14:52.500 Namespace ID:1 00:14:52.500 Error Recovery Timeout: Unlimited 00:14:52.500 Command Set Identifier: NVM (00h) 00:14:52.500 Deallocate: Supported 00:14:52.500 Deallocated/Unwritten Error: Supported 00:14:52.500 Deallocated Read Value: All 0x00 00:14:52.500 Deallocate in Write Zeroes: Not Supported 00:14:52.500 Deallocated Guard Field: 0xFFFF 00:14:52.500 Flush: Supported 00:14:52.500 Reservation: Not Supported 00:14:52.500 Namespace Sharing Capabilities: Multiple Controllers 00:14:52.500 Size (in LBAs): 262144 (1GiB) 00:14:52.500 Capacity (in LBAs): 262144 (1GiB) 00:14:52.500 Utilization (in LBAs): 262144 (1GiB) 00:14:52.500 Thin Provisioning: Not Supported 00:14:52.500 Per-NS Atomic Units: No 00:14:52.500 Maximum Single Source Range Length: 128 00:14:52.500 Maximum Copy Length: 128 00:14:52.500 Maximum Source Range Count: 128 00:14:52.500 NGUID/EUI64 Never Reused: No 00:14:52.500 Namespace Write Protected: No 00:14:52.500 Endurance group ID: 1 00:14:52.500 Number of LBA Formats: 8 00:14:52.500 Current LBA Format: LBA Format #04 00:14:52.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.500 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:52.500 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:52.500 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:52.500 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:52.500 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:52.500 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:52.500 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:52.500 00:14:52.500 Get Feature FDP: 00:14:52.500 ================ 00:14:52.500 Enabled: Yes 00:14:52.500 FDP configuration index: 0 00:14:52.500 00:14:52.500 FDP configurations log page 00:14:52.500 =========================== 00:14:52.500 Number of FDP configurations: 1 00:14:52.500 Version: 0 00:14:52.500 Size: 112 00:14:52.500 FDP Configuration Descriptor: 0 00:14:52.500 Descriptor Size: 96 00:14:52.500 Reclaim Group Identifier format: 2 00:14:52.500 FDP Volatile Write Cache: Not Present 00:14:52.500 FDP Configuration: Valid 00:14:52.500 Vendor Specific Size: 0 00:14:52.500 Number of Reclaim Groups: 2 00:14:52.500 Number of Reclaim Unit Handles: 8 00:14:52.500 Max Placement Identifiers: 128 00:14:52.500 Number of 
Namespaces Supported: 256 00:14:52.500 Reclaim unit Nominal Size: 6000000 bytes 00:14:52.500 Estimated Reclaim Unit Time Limit: Not Reported 00:14:52.500 RUH Desc #000: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #001: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #002: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #003: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #004: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #005: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #006: RUH Type: Initially Isolated 00:14:52.500 RUH Desc #007: RUH Type: Initially Isolated 00:14:52.500 00:14:52.500 FDP reclaim unit handle usage log page 00:14:52.500 ====================================== 00:14:52.500 Number of Reclaim Unit Handles: 8 00:14:52.500 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:52.500 RUH Usage Desc #001: RUH Attributes: Unused 00:14:52.500 RUH Usage Desc #002: RUH Attributes: Unused 00:14:52.500 RUH Usage Desc #003: RUH Attributes: Unused 00:14:52.500 RUH Usage Desc #004: RUH Attributes: Unused 00:14:52.500 RUH Usage Desc #005: RUH Attributes: Unused 00:14:52.500 RUH Usage Desc #006: RUH Attributes: Unused 00:14:52.500 RUH Usage Desc #007: RUH Attributes: Unused 00:14:52.500 00:14:52.500 FDP statistics log page 00:14:52.500 ======================= 00:14:52.500 Host bytes with metadata written: 432054272 00:14:52.500 [2024-11-04 13:48:39.333451] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64986 terminated unexpected 00:14:52.500 Media bytes with metadata written: 432099328 00:14:52.500 Media bytes erased: 0 00:14:52.500 00:14:52.500 FDP events log page 00:14:52.500 =================== 00:14:52.500 Number of FDP events: 0 00:14:52.500 00:14:52.500 NVM Specific Namespace Data 00:14:52.500 =========================== 00:14:52.500 Logical Block Storage Tag Mask: 0 00:14:52.500 Protection Information Capabilities: 00:14:52.500 16b Guard Protection Information Storage Tag Support: No 00:14:52.500 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:52.500 Storage Tag Check Read Support: No 00:14:52.500 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.500 ===================================================== 00:14:52.500 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:52.500 ===================================================== 00:14:52.500 Controller Capabilities/Features 00:14:52.500 ================================ 00:14:52.500 Vendor ID: 1b36 00:14:52.500 Subsystem Vendor ID: 1af4 00:14:52.500 Serial Number: 12342 00:14:52.500 Model Number: QEMU NVMe Ctrl 00:14:52.500 Firmware Version: 8.0.0 00:14:52.500 Recommended Arb Burst: 6 00:14:52.500 IEEE OUI Identifier: 00 54 52 00:14:52.500 Multi-path I/O 
00:14:52.500 May have multiple subsystem ports: No 00:14:52.500 May have multiple controllers: No 00:14:52.500 Associated with SR-IOV VF: No 00:14:52.500 Max Data Transfer Size: 524288 00:14:52.500 Max Number of Namespaces: 256 00:14:52.500 Max Number of I/O Queues: 64 00:14:52.500 NVMe Specification Version (VS): 1.4 00:14:52.501 NVMe Specification Version (Identify): 1.4 00:14:52.501 Maximum Queue Entries: 2048 00:14:52.501 Contiguous Queues Required: Yes 00:14:52.501 Arbitration Mechanisms Supported 00:14:52.501 Weighted Round Robin: Not Supported 00:14:52.501 Vendor Specific: Not Supported 00:14:52.501 Reset Timeout: 7500 ms 00:14:52.501 Doorbell Stride: 4 bytes 00:14:52.501 NVM Subsystem Reset: Not Supported 00:14:52.501 Command Sets Supported 00:14:52.501 NVM Command Set: Supported 00:14:52.501 Boot Partition: Not Supported 00:14:52.501 Memory Page Size Minimum: 4096 bytes 00:14:52.501 Memory Page Size Maximum: 65536 bytes 00:14:52.501 Persistent Memory Region: Not Supported 00:14:52.501 Optional Asynchronous Events Supported 00:14:52.501 Namespace Attribute Notices: Supported 00:14:52.501 Firmware Activation Notices: Not Supported 00:14:52.501 ANA Change Notices: Not Supported 00:14:52.501 PLE Aggregate Log Change Notices: Not Supported 00:14:52.501 LBA Status Info Alert Notices: Not Supported 00:14:52.501 EGE Aggregate Log Change Notices: Not Supported 00:14:52.501 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.501 Zone Descriptor Change Notices: Not Supported 00:14:52.501 Discovery Log Change Notices: Not Supported 00:14:52.501 Controller Attributes 00:14:52.501 128-bit Host Identifier: Not Supported 00:14:52.501 Non-Operational Permissive Mode: Not Supported 00:14:52.501 NVM Sets: Not Supported 00:14:52.501 Read Recovery Levels: Not Supported 00:14:52.501 Endurance Groups: Not Supported 00:14:52.501 Predictable Latency Mode: Not Supported 00:14:52.501 Traffic Based Keep ALive: Not Supported 00:14:52.501 Namespace Granularity: Not Supported 00:14:52.501 SQ Associations: Not Supported 00:14:52.501 UUID List: Not Supported 00:14:52.501 Multi-Domain Subsystem: Not Supported 00:14:52.501 Fixed Capacity Management: Not Supported 00:14:52.501 Variable Capacity Management: Not Supported 00:14:52.501 Delete Endurance Group: Not Supported 00:14:52.501 Delete NVM Set: Not Supported 00:14:52.501 Extended LBA Formats Supported: Supported 00:14:52.501 Flexible Data Placement Supported: Not Supported 00:14:52.501 00:14:52.501 Controller Memory Buffer Support 00:14:52.501 ================================ 00:14:52.501 Supported: No 00:14:52.501 00:14:52.501 Persistent Memory Region Support 00:14:52.501 ================================ 00:14:52.501 Supported: No 00:14:52.501 00:14:52.501 Admin Command Set Attributes 00:14:52.501 ============================ 00:14:52.501 Security Send/Receive: Not Supported 00:14:52.501 Format NVM: Supported 00:14:52.501 Firmware Activate/Download: Not Supported 00:14:52.501 Namespace Management: Supported 00:14:52.501 Device Self-Test: Not Supported 00:14:52.501 Directives: Supported 00:14:52.501 NVMe-MI: Not Supported 00:14:52.501 Virtualization Management: Not Supported 00:14:52.501 Doorbell Buffer Config: Supported 00:14:52.501 Get LBA Status Capability: Not Supported 00:14:52.501 Command & Feature Lockdown Capability: Not Supported 00:14:52.501 Abort Command Limit: 4 00:14:52.501 Async Event Request Limit: 4 00:14:52.501 Number of Firmware Slots: N/A 00:14:52.501 Firmware Slot 1 Read-Only: N/A 00:14:52.501 Firmware Activation Without Reset: N/A 
00:14:52.501 Multiple Update Detection Support: N/A 00:14:52.501 Firmware Update Granularity: No Information Provided 00:14:52.501 Per-Namespace SMART Log: Yes 00:14:52.501 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.501 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:52.501 Command Effects Log Page: Supported 00:14:52.501 Get Log Page Extended Data: Supported 00:14:52.501 Telemetry Log Pages: Not Supported 00:14:52.501 Persistent Event Log Pages: Not Supported 00:14:52.501 Supported Log Pages Log Page: May Support 00:14:52.501 Commands Supported & Effects Log Page: Not Supported 00:14:52.501 Feature Identifiers & Effects Log Page:May Support 00:14:52.501 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.501 Data Area 4 for Telemetry Log: Not Supported 00:14:52.501 Error Log Page Entries Supported: 1 00:14:52.501 Keep Alive: Not Supported 00:14:52.501 00:14:52.501 NVM Command Set Attributes 00:14:52.501 ========================== 00:14:52.501 Submission Queue Entry Size 00:14:52.501 Max: 64 00:14:52.501 Min: 64 00:14:52.501 Completion Queue Entry Size 00:14:52.501 Max: 16 00:14:52.501 Min: 16 00:14:52.501 Number of Namespaces: 256 00:14:52.501 Compare Command: Supported 00:14:52.501 Write Uncorrectable Command: Not Supported 00:14:52.501 Dataset Management Command: Supported 00:14:52.501 Write Zeroes Command: Supported 00:14:52.501 Set Features Save Field: Supported 00:14:52.501 Reservations: Not Supported 00:14:52.501 Timestamp: Supported 00:14:52.501 Copy: Supported 00:14:52.501 Volatile Write Cache: Present 00:14:52.501 Atomic Write Unit (Normal): 1 00:14:52.501 Atomic Write Unit (PFail): 1 00:14:52.501 Atomic Compare & Write Unit: 1 00:14:52.501 Fused Compare & Write: Not Supported 00:14:52.501 Scatter-Gather List 00:14:52.501 SGL Command Set: Supported 00:14:52.501 SGL Keyed: Not Supported 00:14:52.501 SGL Bit Bucket Descriptor: Not Supported 00:14:52.501 SGL Metadata Pointer: Not Supported 00:14:52.501 Oversized SGL: Not Supported 00:14:52.501 SGL Metadata Address: Not Supported 00:14:52.501 SGL Offset: Not Supported 00:14:52.501 Transport SGL Data Block: Not Supported 00:14:52.501 Replay Protected Memory Block: Not Supported 00:14:52.501 00:14:52.501 Firmware Slot Information 00:14:52.501 ========================= 00:14:52.501 Active slot: 1 00:14:52.501 Slot 1 Firmware Revision: 1.0 00:14:52.501 00:14:52.501 00:14:52.501 Commands Supported and Effects 00:14:52.501 ============================== 00:14:52.501 Admin Commands 00:14:52.501 -------------- 00:14:52.501 Delete I/O Submission Queue (00h): Supported 00:14:52.501 Create I/O Submission Queue (01h): Supported 00:14:52.501 Get Log Page (02h): Supported 00:14:52.501 Delete I/O Completion Queue (04h): Supported 00:14:52.501 Create I/O Completion Queue (05h): Supported 00:14:52.501 Identify (06h): Supported 00:14:52.501 Abort (08h): Supported 00:14:52.501 Set Features (09h): Supported 00:14:52.501 Get Features (0Ah): Supported 00:14:52.501 Asynchronous Event Request (0Ch): Supported 00:14:52.501 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:52.501 Directive Send (19h): Supported 00:14:52.501 Directive Receive (1Ah): Supported 00:14:52.501 Virtualization Management (1Ch): Supported 00:14:52.501 Doorbell Buffer Config (7Ch): Supported 00:14:52.501 Format NVM (80h): Supported LBA-Change 00:14:52.501 I/O Commands 00:14:52.501 ------------ 00:14:52.501 Flush (00h): Supported LBA-Change 00:14:52.501 Write (01h): Supported LBA-Change 00:14:52.501 Read (02h): Supported 00:14:52.501 Compare (05h): 
Supported 00:14:52.501 Write Zeroes (08h): Supported LBA-Change 00:14:52.501 Dataset Management (09h): Supported LBA-Change 00:14:52.501 Unknown (0Ch): Supported 00:14:52.502 Unknown (12h): Supported 00:14:52.502 Copy (19h): Supported LBA-Change 00:14:52.502 Unknown (1Dh): Supported LBA-Change 00:14:52.502 00:14:52.502 Error Log 00:14:52.502 ========= 00:14:52.502 00:14:52.502 Arbitration 00:14:52.502 =========== 00:14:52.502 Arbitration Burst: no limit 00:14:52.502 00:14:52.502 Power Management 00:14:52.502 ================ 00:14:52.502 Number of Power States: 1 00:14:52.502 Current Power State: Power State #0 00:14:52.502 Power State #0: 00:14:52.502 Max Power: 25.00 W 00:14:52.502 Non-Operational State: Operational 00:14:52.502 Entry Latency: 16 microseconds 00:14:52.502 Exit Latency: 4 microseconds 00:14:52.502 Relative Read Throughput: 0 00:14:52.502 Relative Read Latency: 0 00:14:52.502 Relative Write Throughput: 0 00:14:52.502 Relative Write Latency: 0 00:14:52.502 Idle Power: Not Reported 00:14:52.502 Active Power: Not Reported 00:14:52.502 Non-Operational Permissive Mode: Not Supported 00:14:52.502 00:14:52.502 Health Information 00:14:52.502 ================== 00:14:52.502 Critical Warnings: 00:14:52.502 Available Spare Space: OK 00:14:52.502 Temperature: OK 00:14:52.502 Device Reliability: OK 00:14:52.502 Read Only: No 00:14:52.502 Volatile Memory Backup: OK 00:14:52.502 Current Temperature: 323 Kelvin (50 Celsius) 00:14:52.502 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:52.502 Available Spare: 0% 00:14:52.502 Available Spare Threshold: 0% 00:14:52.502 Life Percentage Used: 0% 00:14:52.502 Data Units Read: 2111 00:14:52.502 Data Units Written: 1898 00:14:52.502 Host Read Commands: 96229 00:14:52.502 Host Write Commands: 94498 00:14:52.502 Controller Busy Time: 0 minutes 00:14:52.502 Power Cycles: 0 00:14:52.502 Power On Hours: 0 hours 00:14:52.502 Unsafe Shutdowns: 0 00:14:52.502 Unrecoverable Media Errors: 0 00:14:52.502 Lifetime Error Log Entries: 0 00:14:52.502 Warning Temperature Time: 0 minutes 00:14:52.502 Critical Temperature Time: 0 minutes 00:14:52.502 00:14:52.502 Number of Queues 00:14:52.502 ================ 00:14:52.502 Number of I/O Submission Queues: 64 00:14:52.502 Number of I/O Completion Queues: 64 00:14:52.502 00:14:52.502 ZNS Specific Controller Data 00:14:52.502 ============================ 00:14:52.502 Zone Append Size Limit: 0 00:14:52.502 00:14:52.502 00:14:52.502 Active Namespaces 00:14:52.502 ================= 00:14:52.502 Namespace ID:1 00:14:52.502 Error Recovery Timeout: Unlimited 00:14:52.502 Command Set Identifier: NVM (00h) 00:14:52.502 Deallocate: Supported 00:14:52.502 Deallocated/Unwritten Error: Supported 00:14:52.502 Deallocated Read Value: All 0x00 00:14:52.502 Deallocate in Write Zeroes: Not Supported 00:14:52.502 Deallocated Guard Field: 0xFFFF 00:14:52.502 Flush: Supported 00:14:52.502 Reservation: Not Supported 00:14:52.502 Namespace Sharing Capabilities: Private 00:14:52.502 Size (in LBAs): 1048576 (4GiB) 00:14:52.502 Capacity (in LBAs): 1048576 (4GiB) 00:14:52.502 Utilization (in LBAs): 1048576 (4GiB) 00:14:52.502 Thin Provisioning: Not Supported 00:14:52.502 Per-NS Atomic Units: No 00:14:52.502 Maximum Single Source Range Length: 128 00:14:52.502 Maximum Copy Length: 128 00:14:52.502 Maximum Source Range Count: 128 00:14:52.502 NGUID/EUI64 Never Reused: No 00:14:52.502 Namespace Write Protected: No 00:14:52.502 Number of LBA Formats: 8 00:14:52.502 Current LBA Format: LBA Format #04 00:14:52.502 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:14:52.502 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:52.502 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:52.502 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:52.502 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:52.502 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:52.502 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:52.502 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:52.502 00:14:52.502 NVM Specific Namespace Data 00:14:52.502 =========================== 00:14:52.502 Logical Block Storage Tag Mask: 0 00:14:52.502 Protection Information Capabilities: 00:14:52.502 16b Guard Protection Information Storage Tag Support: No 00:14:52.502 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:52.502 Storage Tag Check Read Support: No 00:14:52.502 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Namespace ID:2 00:14:52.502 Error Recovery Timeout: Unlimited 00:14:52.502 Command Set Identifier: NVM (00h) 00:14:52.502 Deallocate: Supported 00:14:52.502 Deallocated/Unwritten Error: Supported 00:14:52.502 Deallocated Read Value: All 0x00 00:14:52.502 Deallocate in Write Zeroes: Not Supported 00:14:52.502 Deallocated Guard Field: 0xFFFF 00:14:52.502 Flush: Supported 00:14:52.502 Reservation: Not Supported 00:14:52.502 Namespace Sharing Capabilities: Private 00:14:52.502 Size (in LBAs): 1048576 (4GiB) 00:14:52.502 Capacity (in LBAs): 1048576 (4GiB) 00:14:52.502 Utilization (in LBAs): 1048576 (4GiB) 00:14:52.502 Thin Provisioning: Not Supported 00:14:52.502 Per-NS Atomic Units: No 00:14:52.502 Maximum Single Source Range Length: 128 00:14:52.502 Maximum Copy Length: 128 00:14:52.502 Maximum Source Range Count: 128 00:14:52.502 NGUID/EUI64 Never Reused: No 00:14:52.502 Namespace Write Protected: No 00:14:52.502 Number of LBA Formats: 8 00:14:52.502 Current LBA Format: LBA Format #04 00:14:52.502 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.502 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:52.502 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:52.502 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:52.502 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:52.502 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:52.502 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:52.502 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:52.502 00:14:52.502 NVM Specific Namespace Data 00:14:52.502 =========================== 00:14:52.502 Logical Block Storage Tag Mask: 0 00:14:52.502 Protection Information Capabilities: 00:14:52.502 16b Guard Protection Information Storage Tag Support: No 00:14:52.502 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:14:52.502 Storage Tag Check Read Support: No 00:14:52.502 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.502 Namespace ID:3 00:14:52.502 Error Recovery Timeout: Unlimited 00:14:52.502 Command Set Identifier: NVM (00h) 00:14:52.502 Deallocate: Supported 00:14:52.502 Deallocated/Unwritten Error: Supported 00:14:52.502 Deallocated Read Value: All 0x00 00:14:52.502 Deallocate in Write Zeroes: Not Supported 00:14:52.502 Deallocated Guard Field: 0xFFFF 00:14:52.502 Flush: Supported 00:14:52.502 Reservation: Not Supported 00:14:52.502 Namespace Sharing Capabilities: Private 00:14:52.502 Size (in LBAs): 1048576 (4GiB) 00:14:52.502 Capacity (in LBAs): 1048576 (4GiB) 00:14:52.502 Utilization (in LBAs): 1048576 (4GiB) 00:14:52.502 Thin Provisioning: Not Supported 00:14:52.502 Per-NS Atomic Units: No 00:14:52.502 Maximum Single Source Range Length: 128 00:14:52.502 Maximum Copy Length: 128 00:14:52.502 Maximum Source Range Count: 128 00:14:52.502 NGUID/EUI64 Never Reused: No 00:14:52.502 Namespace Write Protected: No 00:14:52.502 Number of LBA Formats: 8 00:14:52.502 Current LBA Format: LBA Format #04 00:14:52.502 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.502 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:52.502 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:52.502 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:52.502 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:52.502 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:52.502 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:52.502 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:52.502 00:14:52.502 NVM Specific Namespace Data 00:14:52.502 =========================== 00:14:52.502 Logical Block Storage Tag Mask: 0 00:14:52.503 Protection Information Capabilities: 00:14:52.503 16b Guard Protection Information Storage Tag Support: No 00:14:52.503 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:52.503 Storage Tag Check Read Support: No 00:14:52.503 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:52.503 13:48:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:52.503 13:48:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:14:53.073 ===================================================== 00:14:53.073 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:53.073 ===================================================== 00:14:53.073 Controller Capabilities/Features 00:14:53.073 ================================ 00:14:53.073 Vendor ID: 1b36 00:14:53.073 Subsystem Vendor ID: 1af4 00:14:53.073 Serial Number: 12340 00:14:53.073 Model Number: QEMU NVMe Ctrl 00:14:53.073 Firmware Version: 8.0.0 00:14:53.073 Recommended Arb Burst: 6 00:14:53.073 IEEE OUI Identifier: 00 54 52 00:14:53.073 Multi-path I/O 00:14:53.073 May have multiple subsystem ports: No 00:14:53.073 May have multiple controllers: No 00:14:53.073 Associated with SR-IOV VF: No 00:14:53.073 Max Data Transfer Size: 524288 00:14:53.073 Max Number of Namespaces: 256 00:14:53.073 Max Number of I/O Queues: 64 00:14:53.073 NVMe Specification Version (VS): 1.4 00:14:53.073 NVMe Specification Version (Identify): 1.4 00:14:53.073 Maximum Queue Entries: 2048 00:14:53.073 Contiguous Queues Required: Yes 00:14:53.073 Arbitration Mechanisms Supported 00:14:53.073 Weighted Round Robin: Not Supported 00:14:53.073 Vendor Specific: Not Supported 00:14:53.073 Reset Timeout: 7500 ms 00:14:53.073 Doorbell Stride: 4 bytes 00:14:53.073 NVM Subsystem Reset: Not Supported 00:14:53.073 Command Sets Supported 00:14:53.073 NVM Command Set: Supported 00:14:53.073 Boot Partition: Not Supported 00:14:53.073 Memory Page Size Minimum: 4096 bytes 00:14:53.073 Memory Page Size Maximum: 65536 bytes 00:14:53.073 Persistent Memory Region: Not Supported 00:14:53.073 Optional Asynchronous Events Supported 00:14:53.073 Namespace Attribute Notices: Supported 00:14:53.073 Firmware Activation Notices: Not Supported 00:14:53.073 ANA Change Notices: Not Supported 00:14:53.073 PLE Aggregate Log Change Notices: Not Supported 00:14:53.073 LBA Status Info Alert Notices: Not Supported 00:14:53.073 EGE Aggregate Log Change Notices: Not Supported 00:14:53.073 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.073 Zone Descriptor Change Notices: Not Supported 00:14:53.073 Discovery Log Change Notices: Not Supported 00:14:53.073 Controller Attributes 00:14:53.073 128-bit Host Identifier: Not Supported 00:14:53.073 Non-Operational Permissive Mode: Not Supported 00:14:53.073 NVM Sets: Not Supported 00:14:53.073 Read Recovery Levels: Not Supported 00:14:53.073 Endurance Groups: Not Supported 00:14:53.073 Predictable Latency Mode: Not Supported 00:14:53.073 Traffic Based Keep ALive: Not Supported 00:14:53.073 Namespace Granularity: Not Supported 00:14:53.073 SQ Associations: Not Supported 00:14:53.073 UUID List: Not Supported 00:14:53.073 Multi-Domain Subsystem: Not Supported 00:14:53.073 Fixed Capacity Management: Not Supported 00:14:53.073 Variable Capacity Management: Not Supported 00:14:53.073 Delete Endurance Group: Not Supported 00:14:53.073 Delete NVM Set: Not Supported 00:14:53.073 Extended LBA Formats Supported: Supported 00:14:53.073 Flexible Data Placement Supported: Not Supported 00:14:53.073 00:14:53.073 Controller Memory Buffer Support 00:14:53.073 ================================ 00:14:53.073 Supported: No 00:14:53.073 00:14:53.073 Persistent Memory Region Support 00:14:53.073 
================================ 00:14:53.073 Supported: No 00:14:53.073 00:14:53.073 Admin Command Set Attributes 00:14:53.073 ============================ 00:14:53.073 Security Send/Receive: Not Supported 00:14:53.073 Format NVM: Supported 00:14:53.073 Firmware Activate/Download: Not Supported 00:14:53.073 Namespace Management: Supported 00:14:53.073 Device Self-Test: Not Supported 00:14:53.073 Directives: Supported 00:14:53.073 NVMe-MI: Not Supported 00:14:53.073 Virtualization Management: Not Supported 00:14:53.073 Doorbell Buffer Config: Supported 00:14:53.073 Get LBA Status Capability: Not Supported 00:14:53.073 Command & Feature Lockdown Capability: Not Supported 00:14:53.073 Abort Command Limit: 4 00:14:53.073 Async Event Request Limit: 4 00:14:53.073 Number of Firmware Slots: N/A 00:14:53.073 Firmware Slot 1 Read-Only: N/A 00:14:53.073 Firmware Activation Without Reset: N/A 00:14:53.073 Multiple Update Detection Support: N/A 00:14:53.073 Firmware Update Granularity: No Information Provided 00:14:53.073 Per-Namespace SMART Log: Yes 00:14:53.073 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.073 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:53.073 Command Effects Log Page: Supported 00:14:53.073 Get Log Page Extended Data: Supported 00:14:53.073 Telemetry Log Pages: Not Supported 00:14:53.073 Persistent Event Log Pages: Not Supported 00:14:53.073 Supported Log Pages Log Page: May Support 00:14:53.073 Commands Supported & Effects Log Page: Not Supported 00:14:53.073 Feature Identifiers & Effects Log Page:May Support 00:14:53.073 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.073 Data Area 4 for Telemetry Log: Not Supported 00:14:53.073 Error Log Page Entries Supported: 1 00:14:53.073 Keep Alive: Not Supported 00:14:53.073 00:14:53.073 NVM Command Set Attributes 00:14:53.073 ========================== 00:14:53.073 Submission Queue Entry Size 00:14:53.073 Max: 64 00:14:53.073 Min: 64 00:14:53.073 Completion Queue Entry Size 00:14:53.073 Max: 16 00:14:53.073 Min: 16 00:14:53.073 Number of Namespaces: 256 00:14:53.073 Compare Command: Supported 00:14:53.073 Write Uncorrectable Command: Not Supported 00:14:53.073 Dataset Management Command: Supported 00:14:53.073 Write Zeroes Command: Supported 00:14:53.073 Set Features Save Field: Supported 00:14:53.073 Reservations: Not Supported 00:14:53.073 Timestamp: Supported 00:14:53.073 Copy: Supported 00:14:53.073 Volatile Write Cache: Present 00:14:53.073 Atomic Write Unit (Normal): 1 00:14:53.073 Atomic Write Unit (PFail): 1 00:14:53.073 Atomic Compare & Write Unit: 1 00:14:53.073 Fused Compare & Write: Not Supported 00:14:53.073 Scatter-Gather List 00:14:53.073 SGL Command Set: Supported 00:14:53.073 SGL Keyed: Not Supported 00:14:53.073 SGL Bit Bucket Descriptor: Not Supported 00:14:53.073 SGL Metadata Pointer: Not Supported 00:14:53.073 Oversized SGL: Not Supported 00:14:53.073 SGL Metadata Address: Not Supported 00:14:53.073 SGL Offset: Not Supported 00:14:53.073 Transport SGL Data Block: Not Supported 00:14:53.073 Replay Protected Memory Block: Not Supported 00:14:53.073 00:14:53.073 Firmware Slot Information 00:14:53.073 ========================= 00:14:53.073 Active slot: 1 00:14:53.073 Slot 1 Firmware Revision: 1.0 00:14:53.073 00:14:53.073 00:14:53.073 Commands Supported and Effects 00:14:53.073 ============================== 00:14:53.073 Admin Commands 00:14:53.073 -------------- 00:14:53.073 Delete I/O Submission Queue (00h): Supported 00:14:53.073 Create I/O Submission Queue (01h): Supported 00:14:53.073 
Get Log Page (02h): Supported 00:14:53.073 Delete I/O Completion Queue (04h): Supported 00:14:53.073 Create I/O Completion Queue (05h): Supported 00:14:53.073 Identify (06h): Supported 00:14:53.073 Abort (08h): Supported 00:14:53.073 Set Features (09h): Supported 00:14:53.073 Get Features (0Ah): Supported 00:14:53.073 Asynchronous Event Request (0Ch): Supported 00:14:53.073 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:53.073 Directive Send (19h): Supported 00:14:53.073 Directive Receive (1Ah): Supported 00:14:53.073 Virtualization Management (1Ch): Supported 00:14:53.073 Doorbell Buffer Config (7Ch): Supported 00:14:53.073 Format NVM (80h): Supported LBA-Change 00:14:53.073 I/O Commands 00:14:53.073 ------------ 00:14:53.073 Flush (00h): Supported LBA-Change 00:14:53.073 Write (01h): Supported LBA-Change 00:14:53.073 Read (02h): Supported 00:14:53.073 Compare (05h): Supported 00:14:53.073 Write Zeroes (08h): Supported LBA-Change 00:14:53.073 Dataset Management (09h): Supported LBA-Change 00:14:53.073 Unknown (0Ch): Supported 00:14:53.073 Unknown (12h): Supported 00:14:53.073 Copy (19h): Supported LBA-Change 00:14:53.073 Unknown (1Dh): Supported LBA-Change 00:14:53.073 00:14:53.073 Error Log 00:14:53.073 ========= 00:14:53.073 00:14:53.073 Arbitration 00:14:53.073 =========== 00:14:53.073 Arbitration Burst: no limit 00:14:53.073 00:14:53.073 Power Management 00:14:53.073 ================ 00:14:53.073 Number of Power States: 1 00:14:53.073 Current Power State: Power State #0 00:14:53.073 Power State #0: 00:14:53.073 Max Power: 25.00 W 00:14:53.073 Non-Operational State: Operational 00:14:53.073 Entry Latency: 16 microseconds 00:14:53.073 Exit Latency: 4 microseconds 00:14:53.073 Relative Read Throughput: 0 00:14:53.073 Relative Read Latency: 0 00:14:53.073 Relative Write Throughput: 0 00:14:53.073 Relative Write Latency: 0 00:14:53.073 Idle Power: Not Reported 00:14:53.073 Active Power: Not Reported 00:14:53.073 Non-Operational Permissive Mode: Not Supported 00:14:53.073 00:14:53.073 Health Information 00:14:53.073 ================== 00:14:53.073 Critical Warnings: 00:14:53.073 Available Spare Space: OK 00:14:53.073 Temperature: OK 00:14:53.073 Device Reliability: OK 00:14:53.073 Read Only: No 00:14:53.073 Volatile Memory Backup: OK 00:14:53.073 Current Temperature: 323 Kelvin (50 Celsius) 00:14:53.073 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:53.073 Available Spare: 0% 00:14:53.073 Available Spare Threshold: 0% 00:14:53.073 Life Percentage Used: 0% 00:14:53.073 Data Units Read: 671 00:14:53.073 Data Units Written: 599 00:14:53.073 Host Read Commands: 31603 00:14:53.073 Host Write Commands: 31389 00:14:53.073 Controller Busy Time: 0 minutes 00:14:53.073 Power Cycles: 0 00:14:53.073 Power On Hours: 0 hours 00:14:53.073 Unsafe Shutdowns: 0 00:14:53.073 Unrecoverable Media Errors: 0 00:14:53.073 Lifetime Error Log Entries: 0 00:14:53.073 Warning Temperature Time: 0 minutes 00:14:53.074 Critical Temperature Time: 0 minutes 00:14:53.074 00:14:53.074 Number of Queues 00:14:53.074 ================ 00:14:53.074 Number of I/O Submission Queues: 64 00:14:53.074 Number of I/O Completion Queues: 64 00:14:53.074 00:14:53.074 ZNS Specific Controller Data 00:14:53.074 ============================ 00:14:53.074 Zone Append Size Limit: 0 00:14:53.074 00:14:53.074 00:14:53.074 Active Namespaces 00:14:53.074 ================= 00:14:53.074 Namespace ID:1 00:14:53.074 Error Recovery Timeout: Unlimited 00:14:53.074 Command Set Identifier: NVM (00h) 00:14:53.074 Deallocate: Supported 
00:14:53.074 Deallocated/Unwritten Error: Supported 00:14:53.074 Deallocated Read Value: All 0x00 00:14:53.074 Deallocate in Write Zeroes: Not Supported 00:14:53.074 Deallocated Guard Field: 0xFFFF 00:14:53.074 Flush: Supported 00:14:53.074 Reservation: Not Supported 00:14:53.074 Metadata Transferred as: Separate Metadata Buffer 00:14:53.074 Namespace Sharing Capabilities: Private 00:14:53.074 Size (in LBAs): 1548666 (5GiB) 00:14:53.074 Capacity (in LBAs): 1548666 (5GiB) 00:14:53.074 Utilization (in LBAs): 1548666 (5GiB) 00:14:53.074 Thin Provisioning: Not Supported 00:14:53.074 Per-NS Atomic Units: No 00:14:53.074 Maximum Single Source Range Length: 128 00:14:53.074 Maximum Copy Length: 128 00:14:53.074 Maximum Source Range Count: 128 00:14:53.074 NGUID/EUI64 Never Reused: No 00:14:53.074 Namespace Write Protected: No 00:14:53.074 Number of LBA Formats: 8 00:14:53.074 Current LBA Format: LBA Format #07 00:14:53.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.074 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:53.074 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:53.074 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:53.074 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:53.074 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:53.074 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:53.074 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:53.074 00:14:53.074 NVM Specific Namespace Data 00:14:53.074 =========================== 00:14:53.074 Logical Block Storage Tag Mask: 0 00:14:53.074 Protection Information Capabilities: 00:14:53.074 16b Guard Protection Information Storage Tag Support: No 00:14:53.074 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:53.074 Storage Tag Check Read Support: No 00:14:53.074 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.074 13:48:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:53.074 13:48:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:14:53.333 ===================================================== 00:14:53.333 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:53.333 ===================================================== 00:14:53.333 Controller Capabilities/Features 00:14:53.333 ================================ 00:14:53.333 Vendor ID: 1b36 00:14:53.333 Subsystem Vendor ID: 1af4 00:14:53.333 Serial Number: 12341 00:14:53.333 Model Number: QEMU NVMe Ctrl 00:14:53.333 Firmware Version: 8.0.0 00:14:53.333 Recommended Arb Burst: 6 00:14:53.333 IEEE OUI Identifier: 00 54 52 00:14:53.333 Multi-path I/O 00:14:53.333 May have multiple subsystem ports: No 00:14:53.333 May have multiple 
controllers: No 00:14:53.333 Associated with SR-IOV VF: No 00:14:53.333 Max Data Transfer Size: 524288 00:14:53.333 Max Number of Namespaces: 256 00:14:53.333 Max Number of I/O Queues: 64 00:14:53.333 NVMe Specification Version (VS): 1.4 00:14:53.333 NVMe Specification Version (Identify): 1.4 00:14:53.333 Maximum Queue Entries: 2048 00:14:53.333 Contiguous Queues Required: Yes 00:14:53.333 Arbitration Mechanisms Supported 00:14:53.333 Weighted Round Robin: Not Supported 00:14:53.333 Vendor Specific: Not Supported 00:14:53.333 Reset Timeout: 7500 ms 00:14:53.333 Doorbell Stride: 4 bytes 00:14:53.333 NVM Subsystem Reset: Not Supported 00:14:53.333 Command Sets Supported 00:14:53.333 NVM Command Set: Supported 00:14:53.333 Boot Partition: Not Supported 00:14:53.333 Memory Page Size Minimum: 4096 bytes 00:14:53.333 Memory Page Size Maximum: 65536 bytes 00:14:53.333 Persistent Memory Region: Not Supported 00:14:53.333 Optional Asynchronous Events Supported 00:14:53.333 Namespace Attribute Notices: Supported 00:14:53.333 Firmware Activation Notices: Not Supported 00:14:53.333 ANA Change Notices: Not Supported 00:14:53.333 PLE Aggregate Log Change Notices: Not Supported 00:14:53.333 LBA Status Info Alert Notices: Not Supported 00:14:53.333 EGE Aggregate Log Change Notices: Not Supported 00:14:53.333 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.334 Zone Descriptor Change Notices: Not Supported 00:14:53.334 Discovery Log Change Notices: Not Supported 00:14:53.334 Controller Attributes 00:14:53.334 128-bit Host Identifier: Not Supported 00:14:53.334 Non-Operational Permissive Mode: Not Supported 00:14:53.334 NVM Sets: Not Supported 00:14:53.334 Read Recovery Levels: Not Supported 00:14:53.334 Endurance Groups: Not Supported 00:14:53.334 Predictable Latency Mode: Not Supported 00:14:53.334 Traffic Based Keep ALive: Not Supported 00:14:53.334 Namespace Granularity: Not Supported 00:14:53.334 SQ Associations: Not Supported 00:14:53.334 UUID List: Not Supported 00:14:53.334 Multi-Domain Subsystem: Not Supported 00:14:53.334 Fixed Capacity Management: Not Supported 00:14:53.334 Variable Capacity Management: Not Supported 00:14:53.334 Delete Endurance Group: Not Supported 00:14:53.334 Delete NVM Set: Not Supported 00:14:53.334 Extended LBA Formats Supported: Supported 00:14:53.334 Flexible Data Placement Supported: Not Supported 00:14:53.334 00:14:53.334 Controller Memory Buffer Support 00:14:53.334 ================================ 00:14:53.334 Supported: No 00:14:53.334 00:14:53.334 Persistent Memory Region Support 00:14:53.334 ================================ 00:14:53.334 Supported: No 00:14:53.334 00:14:53.334 Admin Command Set Attributes 00:14:53.334 ============================ 00:14:53.334 Security Send/Receive: Not Supported 00:14:53.334 Format NVM: Supported 00:14:53.334 Firmware Activate/Download: Not Supported 00:14:53.334 Namespace Management: Supported 00:14:53.334 Device Self-Test: Not Supported 00:14:53.334 Directives: Supported 00:14:53.334 NVMe-MI: Not Supported 00:14:53.334 Virtualization Management: Not Supported 00:14:53.334 Doorbell Buffer Config: Supported 00:14:53.334 Get LBA Status Capability: Not Supported 00:14:53.334 Command & Feature Lockdown Capability: Not Supported 00:14:53.334 Abort Command Limit: 4 00:14:53.334 Async Event Request Limit: 4 00:14:53.334 Number of Firmware Slots: N/A 00:14:53.334 Firmware Slot 1 Read-Only: N/A 00:14:53.334 Firmware Activation Without Reset: N/A 00:14:53.334 Multiple Update Detection Support: N/A 00:14:53.334 Firmware Update 
Granularity: No Information Provided 00:14:53.334 Per-Namespace SMART Log: Yes 00:14:53.334 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.334 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:53.334 Command Effects Log Page: Supported 00:14:53.334 Get Log Page Extended Data: Supported 00:14:53.334 Telemetry Log Pages: Not Supported 00:14:53.334 Persistent Event Log Pages: Not Supported 00:14:53.334 Supported Log Pages Log Page: May Support 00:14:53.334 Commands Supported & Effects Log Page: Not Supported 00:14:53.334 Feature Identifiers & Effects Log Page:May Support 00:14:53.334 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.334 Data Area 4 for Telemetry Log: Not Supported 00:14:53.334 Error Log Page Entries Supported: 1 00:14:53.334 Keep Alive: Not Supported 00:14:53.334 00:14:53.334 NVM Command Set Attributes 00:14:53.334 ========================== 00:14:53.334 Submission Queue Entry Size 00:14:53.334 Max: 64 00:14:53.334 Min: 64 00:14:53.334 Completion Queue Entry Size 00:14:53.334 Max: 16 00:14:53.334 Min: 16 00:14:53.334 Number of Namespaces: 256 00:14:53.334 Compare Command: Supported 00:14:53.334 Write Uncorrectable Command: Not Supported 00:14:53.334 Dataset Management Command: Supported 00:14:53.334 Write Zeroes Command: Supported 00:14:53.334 Set Features Save Field: Supported 00:14:53.334 Reservations: Not Supported 00:14:53.334 Timestamp: Supported 00:14:53.334 Copy: Supported 00:14:53.334 Volatile Write Cache: Present 00:14:53.334 Atomic Write Unit (Normal): 1 00:14:53.334 Atomic Write Unit (PFail): 1 00:14:53.334 Atomic Compare & Write Unit: 1 00:14:53.334 Fused Compare & Write: Not Supported 00:14:53.334 Scatter-Gather List 00:14:53.334 SGL Command Set: Supported 00:14:53.334 SGL Keyed: Not Supported 00:14:53.334 SGL Bit Bucket Descriptor: Not Supported 00:14:53.334 SGL Metadata Pointer: Not Supported 00:14:53.334 Oversized SGL: Not Supported 00:14:53.334 SGL Metadata Address: Not Supported 00:14:53.334 SGL Offset: Not Supported 00:14:53.334 Transport SGL Data Block: Not Supported 00:14:53.334 Replay Protected Memory Block: Not Supported 00:14:53.334 00:14:53.334 Firmware Slot Information 00:14:53.334 ========================= 00:14:53.334 Active slot: 1 00:14:53.334 Slot 1 Firmware Revision: 1.0 00:14:53.334 00:14:53.334 00:14:53.334 Commands Supported and Effects 00:14:53.334 ============================== 00:14:53.334 Admin Commands 00:14:53.334 -------------- 00:14:53.334 Delete I/O Submission Queue (00h): Supported 00:14:53.334 Create I/O Submission Queue (01h): Supported 00:14:53.334 Get Log Page (02h): Supported 00:14:53.334 Delete I/O Completion Queue (04h): Supported 00:14:53.334 Create I/O Completion Queue (05h): Supported 00:14:53.334 Identify (06h): Supported 00:14:53.334 Abort (08h): Supported 00:14:53.334 Set Features (09h): Supported 00:14:53.334 Get Features (0Ah): Supported 00:14:53.334 Asynchronous Event Request (0Ch): Supported 00:14:53.334 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:53.334 Directive Send (19h): Supported 00:14:53.334 Directive Receive (1Ah): Supported 00:14:53.334 Virtualization Management (1Ch): Supported 00:14:53.334 Doorbell Buffer Config (7Ch): Supported 00:14:53.334 Format NVM (80h): Supported LBA-Change 00:14:53.334 I/O Commands 00:14:53.334 ------------ 00:14:53.334 Flush (00h): Supported LBA-Change 00:14:53.334 Write (01h): Supported LBA-Change 00:14:53.334 Read (02h): Supported 00:14:53.334 Compare (05h): Supported 00:14:53.334 Write Zeroes (08h): Supported LBA-Change 00:14:53.334 
Dataset Management (09h): Supported LBA-Change 00:14:53.334 Unknown (0Ch): Supported 00:14:53.334 Unknown (12h): Supported 00:14:53.334 Copy (19h): Supported LBA-Change 00:14:53.334 Unknown (1Dh): Supported LBA-Change 00:14:53.334 00:14:53.334 Error Log 00:14:53.334 ========= 00:14:53.334 00:14:53.334 Arbitration 00:14:53.334 =========== 00:14:53.334 Arbitration Burst: no limit 00:14:53.334 00:14:53.334 Power Management 00:14:53.334 ================ 00:14:53.334 Number of Power States: 1 00:14:53.334 Current Power State: Power State #0 00:14:53.334 Power State #0: 00:14:53.334 Max Power: 25.00 W 00:14:53.334 Non-Operational State: Operational 00:14:53.334 Entry Latency: 16 microseconds 00:14:53.334 Exit Latency: 4 microseconds 00:14:53.334 Relative Read Throughput: 0 00:14:53.334 Relative Read Latency: 0 00:14:53.334 Relative Write Throughput: 0 00:14:53.334 Relative Write Latency: 0 00:14:53.334 Idle Power: Not Reported 00:14:53.334 Active Power: Not Reported 00:14:53.334 Non-Operational Permissive Mode: Not Supported 00:14:53.334 00:14:53.334 Health Information 00:14:53.334 ================== 00:14:53.334 Critical Warnings: 00:14:53.334 Available Spare Space: OK 00:14:53.334 Temperature: OK 00:14:53.334 Device Reliability: OK 00:14:53.334 Read Only: No 00:14:53.334 Volatile Memory Backup: OK 00:14:53.334 Current Temperature: 323 Kelvin (50 Celsius) 00:14:53.334 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:53.334 Available Spare: 0% 00:14:53.334 Available Spare Threshold: 0% 00:14:53.334 Life Percentage Used: 0% 00:14:53.334 Data Units Read: 1039 00:14:53.334 Data Units Written: 900 00:14:53.334 Host Read Commands: 47296 00:14:53.334 Host Write Commands: 45978 00:14:53.334 Controller Busy Time: 0 minutes 00:14:53.334 Power Cycles: 0 00:14:53.334 Power On Hours: 0 hours 00:14:53.334 Unsafe Shutdowns: 0 00:14:53.334 Unrecoverable Media Errors: 0 00:14:53.334 Lifetime Error Log Entries: 0 00:14:53.334 Warning Temperature Time: 0 minutes 00:14:53.334 Critical Temperature Time: 0 minutes 00:14:53.334 00:14:53.334 Number of Queues 00:14:53.334 ================ 00:14:53.334 Number of I/O Submission Queues: 64 00:14:53.334 Number of I/O Completion Queues: 64 00:14:53.334 00:14:53.334 ZNS Specific Controller Data 00:14:53.334 ============================ 00:14:53.334 Zone Append Size Limit: 0 00:14:53.334 00:14:53.334 00:14:53.334 Active Namespaces 00:14:53.334 ================= 00:14:53.334 Namespace ID:1 00:14:53.334 Error Recovery Timeout: Unlimited 00:14:53.334 Command Set Identifier: NVM (00h) 00:14:53.334 Deallocate: Supported 00:14:53.334 Deallocated/Unwritten Error: Supported 00:14:53.334 Deallocated Read Value: All 0x00 00:14:53.334 Deallocate in Write Zeroes: Not Supported 00:14:53.334 Deallocated Guard Field: 0xFFFF 00:14:53.334 Flush: Supported 00:14:53.334 Reservation: Not Supported 00:14:53.334 Namespace Sharing Capabilities: Private 00:14:53.334 Size (in LBAs): 1310720 (5GiB) 00:14:53.334 Capacity (in LBAs): 1310720 (5GiB) 00:14:53.334 Utilization (in LBAs): 1310720 (5GiB) 00:14:53.334 Thin Provisioning: Not Supported 00:14:53.334 Per-NS Atomic Units: No 00:14:53.335 Maximum Single Source Range Length: 128 00:14:53.335 Maximum Copy Length: 128 00:14:53.335 Maximum Source Range Count: 128 00:14:53.335 NGUID/EUI64 Never Reused: No 00:14:53.335 Namespace Write Protected: No 00:14:53.335 Number of LBA Formats: 8 00:14:53.335 Current LBA Format: LBA Format #04 00:14:53.335 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.335 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:14:53.335 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:53.335 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:53.335 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:53.335 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:53.335 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:53.335 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:53.335 00:14:53.335 NVM Specific Namespace Data 00:14:53.335 =========================== 00:14:53.335 Logical Block Storage Tag Mask: 0 00:14:53.335 Protection Information Capabilities: 00:14:53.335 16b Guard Protection Information Storage Tag Support: No 00:14:53.335 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:53.335 Storage Tag Check Read Support: No 00:14:53.335 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.335 13:48:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:53.335 13:48:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:14:53.903 ===================================================== 00:14:53.903 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:53.903 ===================================================== 00:14:53.903 Controller Capabilities/Features 00:14:53.903 ================================ 00:14:53.903 Vendor ID: 1b36 00:14:53.903 Subsystem Vendor ID: 1af4 00:14:53.903 Serial Number: 12342 00:14:53.903 Model Number: QEMU NVMe Ctrl 00:14:53.903 Firmware Version: 8.0.0 00:14:53.903 Recommended Arb Burst: 6 00:14:53.903 IEEE OUI Identifier: 00 54 52 00:14:53.903 Multi-path I/O 00:14:53.903 May have multiple subsystem ports: No 00:14:53.903 May have multiple controllers: No 00:14:53.903 Associated with SR-IOV VF: No 00:14:53.903 Max Data Transfer Size: 524288 00:14:53.903 Max Number of Namespaces: 256 00:14:53.903 Max Number of I/O Queues: 64 00:14:53.903 NVMe Specification Version (VS): 1.4 00:14:53.903 NVMe Specification Version (Identify): 1.4 00:14:53.903 Maximum Queue Entries: 2048 00:14:53.903 Contiguous Queues Required: Yes 00:14:53.903 Arbitration Mechanisms Supported 00:14:53.903 Weighted Round Robin: Not Supported 00:14:53.903 Vendor Specific: Not Supported 00:14:53.903 Reset Timeout: 7500 ms 00:14:53.903 Doorbell Stride: 4 bytes 00:14:53.903 NVM Subsystem Reset: Not Supported 00:14:53.903 Command Sets Supported 00:14:53.903 NVM Command Set: Supported 00:14:53.903 Boot Partition: Not Supported 00:14:53.904 Memory Page Size Minimum: 4096 bytes 00:14:53.904 Memory Page Size Maximum: 65536 bytes 00:14:53.904 Persistent Memory Region: Not Supported 00:14:53.904 Optional Asynchronous Events Supported 00:14:53.904 Namespace Attribute Notices: Supported 00:14:53.904 Firmware 
Activation Notices: Not Supported 00:14:53.904 ANA Change Notices: Not Supported 00:14:53.904 PLE Aggregate Log Change Notices: Not Supported 00:14:53.904 LBA Status Info Alert Notices: Not Supported 00:14:53.904 EGE Aggregate Log Change Notices: Not Supported 00:14:53.904 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.904 Zone Descriptor Change Notices: Not Supported 00:14:53.904 Discovery Log Change Notices: Not Supported 00:14:53.904 Controller Attributes 00:14:53.904 128-bit Host Identifier: Not Supported 00:14:53.904 Non-Operational Permissive Mode: Not Supported 00:14:53.904 NVM Sets: Not Supported 00:14:53.904 Read Recovery Levels: Not Supported 00:14:53.904 Endurance Groups: Not Supported 00:14:53.904 Predictable Latency Mode: Not Supported 00:14:53.904 Traffic Based Keep ALive: Not Supported 00:14:53.904 Namespace Granularity: Not Supported 00:14:53.904 SQ Associations: Not Supported 00:14:53.904 UUID List: Not Supported 00:14:53.904 Multi-Domain Subsystem: Not Supported 00:14:53.904 Fixed Capacity Management: Not Supported 00:14:53.904 Variable Capacity Management: Not Supported 00:14:53.904 Delete Endurance Group: Not Supported 00:14:53.904 Delete NVM Set: Not Supported 00:14:53.904 Extended LBA Formats Supported: Supported 00:14:53.904 Flexible Data Placement Supported: Not Supported 00:14:53.904 00:14:53.904 Controller Memory Buffer Support 00:14:53.904 ================================ 00:14:53.904 Supported: No 00:14:53.904 00:14:53.904 Persistent Memory Region Support 00:14:53.904 ================================ 00:14:53.904 Supported: No 00:14:53.904 00:14:53.904 Admin Command Set Attributes 00:14:53.904 ============================ 00:14:53.904 Security Send/Receive: Not Supported 00:14:53.904 Format NVM: Supported 00:14:53.904 Firmware Activate/Download: Not Supported 00:14:53.904 Namespace Management: Supported 00:14:53.904 Device Self-Test: Not Supported 00:14:53.904 Directives: Supported 00:14:53.904 NVMe-MI: Not Supported 00:14:53.904 Virtualization Management: Not Supported 00:14:53.904 Doorbell Buffer Config: Supported 00:14:53.904 Get LBA Status Capability: Not Supported 00:14:53.904 Command & Feature Lockdown Capability: Not Supported 00:14:53.904 Abort Command Limit: 4 00:14:53.904 Async Event Request Limit: 4 00:14:53.904 Number of Firmware Slots: N/A 00:14:53.904 Firmware Slot 1 Read-Only: N/A 00:14:53.904 Firmware Activation Without Reset: N/A 00:14:53.904 Multiple Update Detection Support: N/A 00:14:53.904 Firmware Update Granularity: No Information Provided 00:14:53.904 Per-Namespace SMART Log: Yes 00:14:53.904 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.904 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:53.904 Command Effects Log Page: Supported 00:14:53.904 Get Log Page Extended Data: Supported 00:14:53.904 Telemetry Log Pages: Not Supported 00:14:53.904 Persistent Event Log Pages: Not Supported 00:14:53.904 Supported Log Pages Log Page: May Support 00:14:53.904 Commands Supported & Effects Log Page: Not Supported 00:14:53.904 Feature Identifiers & Effects Log Page:May Support 00:14:53.904 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.904 Data Area 4 for Telemetry Log: Not Supported 00:14:53.904 Error Log Page Entries Supported: 1 00:14:53.904 Keep Alive: Not Supported 00:14:53.904 00:14:53.904 NVM Command Set Attributes 00:14:53.904 ========================== 00:14:53.904 Submission Queue Entry Size 00:14:53.904 Max: 64 00:14:53.904 Min: 64 00:14:53.904 Completion Queue Entry Size 00:14:53.904 Max: 16 
00:14:53.904 Min: 16 00:14:53.904 Number of Namespaces: 256 00:14:53.904 Compare Command: Supported 00:14:53.904 Write Uncorrectable Command: Not Supported 00:14:53.904 Dataset Management Command: Supported 00:14:53.904 Write Zeroes Command: Supported 00:14:53.904 Set Features Save Field: Supported 00:14:53.904 Reservations: Not Supported 00:14:53.904 Timestamp: Supported 00:14:53.904 Copy: Supported 00:14:53.904 Volatile Write Cache: Present 00:14:53.904 Atomic Write Unit (Normal): 1 00:14:53.904 Atomic Write Unit (PFail): 1 00:14:53.904 Atomic Compare & Write Unit: 1 00:14:53.904 Fused Compare & Write: Not Supported 00:14:53.904 Scatter-Gather List 00:14:53.904 SGL Command Set: Supported 00:14:53.904 SGL Keyed: Not Supported 00:14:53.904 SGL Bit Bucket Descriptor: Not Supported 00:14:53.904 SGL Metadata Pointer: Not Supported 00:14:53.904 Oversized SGL: Not Supported 00:14:53.904 SGL Metadata Address: Not Supported 00:14:53.904 SGL Offset: Not Supported 00:14:53.904 Transport SGL Data Block: Not Supported 00:14:53.904 Replay Protected Memory Block: Not Supported 00:14:53.904 00:14:53.904 Firmware Slot Information 00:14:53.904 ========================= 00:14:53.904 Active slot: 1 00:14:53.904 Slot 1 Firmware Revision: 1.0 00:14:53.904 00:14:53.904 00:14:53.904 Commands Supported and Effects 00:14:53.904 ============================== 00:14:53.904 Admin Commands 00:14:53.904 -------------- 00:14:53.904 Delete I/O Submission Queue (00h): Supported 00:14:53.904 Create I/O Submission Queue (01h): Supported 00:14:53.904 Get Log Page (02h): Supported 00:14:53.904 Delete I/O Completion Queue (04h): Supported 00:14:53.904 Create I/O Completion Queue (05h): Supported 00:14:53.904 Identify (06h): Supported 00:14:53.904 Abort (08h): Supported 00:14:53.904 Set Features (09h): Supported 00:14:53.904 Get Features (0Ah): Supported 00:14:53.904 Asynchronous Event Request (0Ch): Supported 00:14:53.904 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:53.904 Directive Send (19h): Supported 00:14:53.904 Directive Receive (1Ah): Supported 00:14:53.904 Virtualization Management (1Ch): Supported 00:14:53.904 Doorbell Buffer Config (7Ch): Supported 00:14:53.904 Format NVM (80h): Supported LBA-Change 00:14:53.904 I/O Commands 00:14:53.904 ------------ 00:14:53.904 Flush (00h): Supported LBA-Change 00:14:53.904 Write (01h): Supported LBA-Change 00:14:53.904 Read (02h): Supported 00:14:53.904 Compare (05h): Supported 00:14:53.904 Write Zeroes (08h): Supported LBA-Change 00:14:53.904 Dataset Management (09h): Supported LBA-Change 00:14:53.904 Unknown (0Ch): Supported 00:14:53.904 Unknown (12h): Supported 00:14:53.904 Copy (19h): Supported LBA-Change 00:14:53.904 Unknown (1Dh): Supported LBA-Change 00:14:53.904 00:14:53.904 Error Log 00:14:53.904 ========= 00:14:53.904 00:14:53.904 Arbitration 00:14:53.904 =========== 00:14:53.904 Arbitration Burst: no limit 00:14:53.904 00:14:53.904 Power Management 00:14:53.904 ================ 00:14:53.904 Number of Power States: 1 00:14:53.904 Current Power State: Power State #0 00:14:53.904 Power State #0: 00:14:53.904 Max Power: 25.00 W 00:14:53.904 Non-Operational State: Operational 00:14:53.904 Entry Latency: 16 microseconds 00:14:53.904 Exit Latency: 4 microseconds 00:14:53.904 Relative Read Throughput: 0 00:14:53.904 Relative Read Latency: 0 00:14:53.904 Relative Write Throughput: 0 00:14:53.904 Relative Write Latency: 0 00:14:53.904 Idle Power: Not Reported 00:14:53.904 Active Power: Not Reported 00:14:53.904 Non-Operational Permissive Mode: Not Supported 
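The Health Information block that follows reports temperatures the way spdk_nvme_identify prints them: a Kelvin reading with the Celsius equivalent derived by the integer offset K - 273, so the 323 Kelvin current temperature maps to 50 Celsius and the 343 Kelvin threshold to 70 Celsius. A minimal shell sketch of that conversion (the helper name is illustrative, not part of the test scripts):

kelvin_to_celsius() {
    # spdk_nvme_identify prints e.g. "323 Kelvin (50 Celsius)", i.e. K - 273
    local kelvin=$1
    echo "$((kelvin - 273)) Celsius"
}
kelvin_to_celsius 323   # Current Temperature   -> 50 Celsius
kelvin_to_celsius 343   # Temperature Threshold -> 70 Celsius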
00:14:53.904 00:14:53.904 Health Information 00:14:53.904 ================== 00:14:53.904 Critical Warnings: 00:14:53.904 Available Spare Space: OK 00:14:53.904 Temperature: OK 00:14:53.904 Device Reliability: OK 00:14:53.904 Read Only: No 00:14:53.904 Volatile Memory Backup: OK 00:14:53.904 Current Temperature: 323 Kelvin (50 Celsius) 00:14:53.904 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:53.904 Available Spare: 0% 00:14:53.904 Available Spare Threshold: 0% 00:14:53.904 Life Percentage Used: 0% 00:14:53.904 Data Units Read: 2111 00:14:53.904 Data Units Written: 1898 00:14:53.904 Host Read Commands: 96229 00:14:53.904 Host Write Commands: 94498 00:14:53.904 Controller Busy Time: 0 minutes 00:14:53.904 Power Cycles: 0 00:14:53.904 Power On Hours: 0 hours 00:14:53.904 Unsafe Shutdowns: 0 00:14:53.904 Unrecoverable Media Errors: 0 00:14:53.904 Lifetime Error Log Entries: 0 00:14:53.904 Warning Temperature Time: 0 minutes 00:14:53.904 Critical Temperature Time: 0 minutes 00:14:53.904 00:14:53.904 Number of Queues 00:14:53.904 ================ 00:14:53.904 Number of I/O Submission Queues: 64 00:14:53.904 Number of I/O Completion Queues: 64 00:14:53.904 00:14:53.904 ZNS Specific Controller Data 00:14:53.904 ============================ 00:14:53.904 Zone Append Size Limit: 0 00:14:53.904 00:14:53.904 00:14:53.904 Active Namespaces 00:14:53.905 ================= 00:14:53.905 Namespace ID:1 00:14:53.905 Error Recovery Timeout: Unlimited 00:14:53.905 Command Set Identifier: NVM (00h) 00:14:53.905 Deallocate: Supported 00:14:53.905 Deallocated/Unwritten Error: Supported 00:14:53.905 Deallocated Read Value: All 0x00 00:14:53.905 Deallocate in Write Zeroes: Not Supported 00:14:53.905 Deallocated Guard Field: 0xFFFF 00:14:53.905 Flush: Supported 00:14:53.905 Reservation: Not Supported 00:14:53.905 Namespace Sharing Capabilities: Private 00:14:53.905 Size (in LBAs): 1048576 (4GiB) 00:14:53.905 Capacity (in LBAs): 1048576 (4GiB) 00:14:53.905 Utilization (in LBAs): 1048576 (4GiB) 00:14:53.905 Thin Provisioning: Not Supported 00:14:53.905 Per-NS Atomic Units: No 00:14:53.905 Maximum Single Source Range Length: 128 00:14:53.905 Maximum Copy Length: 128 00:14:53.905 Maximum Source Range Count: 128 00:14:53.905 NGUID/EUI64 Never Reused: No 00:14:53.905 Namespace Write Protected: No 00:14:53.905 Number of LBA Formats: 8 00:14:53.905 Current LBA Format: LBA Format #04 00:14:53.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.905 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:53.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:53.905 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:53.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:53.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:53.905 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:53.905 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:53.905 00:14:53.905 NVM Specific Namespace Data 00:14:53.905 =========================== 00:14:53.905 Logical Block Storage Tag Mask: 0 00:14:53.905 Protection Information Capabilities: 00:14:53.905 16b Guard Protection Information Storage Tag Support: No 00:14:53.905 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:53.905 Storage Tag Check Read Support: No 00:14:53.905 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Namespace ID:2 00:14:53.905 Error Recovery Timeout: Unlimited 00:14:53.905 Command Set Identifier: NVM (00h) 00:14:53.905 Deallocate: Supported 00:14:53.905 Deallocated/Unwritten Error: Supported 00:14:53.905 Deallocated Read Value: All 0x00 00:14:53.905 Deallocate in Write Zeroes: Not Supported 00:14:53.905 Deallocated Guard Field: 0xFFFF 00:14:53.905 Flush: Supported 00:14:53.905 Reservation: Not Supported 00:14:53.905 Namespace Sharing Capabilities: Private 00:14:53.905 Size (in LBAs): 1048576 (4GiB) 00:14:53.905 Capacity (in LBAs): 1048576 (4GiB) 00:14:53.905 Utilization (in LBAs): 1048576 (4GiB) 00:14:53.905 Thin Provisioning: Not Supported 00:14:53.905 Per-NS Atomic Units: No 00:14:53.905 Maximum Single Source Range Length: 128 00:14:53.905 Maximum Copy Length: 128 00:14:53.905 Maximum Source Range Count: 128 00:14:53.905 NGUID/EUI64 Never Reused: No 00:14:53.905 Namespace Write Protected: No 00:14:53.905 Number of LBA Formats: 8 00:14:53.905 Current LBA Format: LBA Format #04 00:14:53.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.905 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:53.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:53.905 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:53.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:53.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:53.905 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:53.905 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:53.905 00:14:53.905 NVM Specific Namespace Data 00:14:53.905 =========================== 00:14:53.905 Logical Block Storage Tag Mask: 0 00:14:53.905 Protection Information Capabilities: 00:14:53.905 16b Guard Protection Information Storage Tag Support: No 00:14:53.905 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:53.905 Storage Tag Check Read Support: No 00:14:53.905 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Namespace ID:3 00:14:53.905 Error Recovery Timeout: Unlimited 00:14:53.905 Command Set Identifier: NVM (00h) 00:14:53.905 Deallocate: Supported 00:14:53.905 Deallocated/Unwritten Error: Supported 00:14:53.905 Deallocated Read 
Value: All 0x00 00:14:53.905 Deallocate in Write Zeroes: Not Supported 00:14:53.905 Deallocated Guard Field: 0xFFFF 00:14:53.905 Flush: Supported 00:14:53.905 Reservation: Not Supported 00:14:53.905 Namespace Sharing Capabilities: Private 00:14:53.905 Size (in LBAs): 1048576 (4GiB) 00:14:53.905 Capacity (in LBAs): 1048576 (4GiB) 00:14:53.905 Utilization (in LBAs): 1048576 (4GiB) 00:14:53.905 Thin Provisioning: Not Supported 00:14:53.905 Per-NS Atomic Units: No 00:14:53.905 Maximum Single Source Range Length: 128 00:14:53.905 Maximum Copy Length: 128 00:14:53.905 Maximum Source Range Count: 128 00:14:53.905 NGUID/EUI64 Never Reused: No 00:14:53.905 Namespace Write Protected: No 00:14:53.905 Number of LBA Formats: 8 00:14:53.905 Current LBA Format: LBA Format #04 00:14:53.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.905 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:53.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:53.905 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:53.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:53.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:53.905 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:53.905 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:53.905 00:14:53.905 NVM Specific Namespace Data 00:14:53.905 =========================== 00:14:53.905 Logical Block Storage Tag Mask: 0 00:14:53.905 Protection Information Capabilities: 00:14:53.905 16b Guard Protection Information Storage Tag Support: No 00:14:53.905 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:53.905 Storage Tag Check Read Support: No 00:14:53.905 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:53.905 13:48:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:53.905 13:48:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:14:54.167 ===================================================== 00:14:54.167 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:54.167 ===================================================== 00:14:54.167 Controller Capabilities/Features 00:14:54.167 ================================ 00:14:54.167 Vendor ID: 1b36 00:14:54.167 Subsystem Vendor ID: 1af4 00:14:54.167 Serial Number: 12343 00:14:54.167 Model Number: QEMU NVMe Ctrl 00:14:54.167 Firmware Version: 8.0.0 00:14:54.167 Recommended Arb Burst: 6 00:14:54.167 IEEE OUI Identifier: 00 54 52 00:14:54.167 Multi-path I/O 00:14:54.167 May have multiple subsystem ports: No 00:14:54.167 May have multiple controllers: Yes 00:14:54.167 Associated with SR-IOV VF: No 00:14:54.167 Max Data Transfer Size: 524288 00:14:54.167 Max Number of Namespaces: 
256 00:14:54.167 Max Number of I/O Queues: 64 00:14:54.167 NVMe Specification Version (VS): 1.4 00:14:54.167 NVMe Specification Version (Identify): 1.4 00:14:54.167 Maximum Queue Entries: 2048 00:14:54.167 Contiguous Queues Required: Yes 00:14:54.167 Arbitration Mechanisms Supported 00:14:54.167 Weighted Round Robin: Not Supported 00:14:54.167 Vendor Specific: Not Supported 00:14:54.167 Reset Timeout: 7500 ms 00:14:54.167 Doorbell Stride: 4 bytes 00:14:54.167 NVM Subsystem Reset: Not Supported 00:14:54.167 Command Sets Supported 00:14:54.167 NVM Command Set: Supported 00:14:54.167 Boot Partition: Not Supported 00:14:54.167 Memory Page Size Minimum: 4096 bytes 00:14:54.167 Memory Page Size Maximum: 65536 bytes 00:14:54.167 Persistent Memory Region: Not Supported 00:14:54.167 Optional Asynchronous Events Supported 00:14:54.167 Namespace Attribute Notices: Supported 00:14:54.167 Firmware Activation Notices: Not Supported 00:14:54.167 ANA Change Notices: Not Supported 00:14:54.167 PLE Aggregate Log Change Notices: Not Supported 00:14:54.167 LBA Status Info Alert Notices: Not Supported 00:14:54.167 EGE Aggregate Log Change Notices: Not Supported 00:14:54.167 Normal NVM Subsystem Shutdown event: Not Supported 00:14:54.167 Zone Descriptor Change Notices: Not Supported 00:14:54.167 Discovery Log Change Notices: Not Supported 00:14:54.167 Controller Attributes 00:14:54.167 128-bit Host Identifier: Not Supported 00:14:54.167 Non-Operational Permissive Mode: Not Supported 00:14:54.167 NVM Sets: Not Supported 00:14:54.167 Read Recovery Levels: Not Supported 00:14:54.167 Endurance Groups: Supported 00:14:54.167 Predictable Latency Mode: Not Supported 00:14:54.167 Traffic Based Keep Alive: Not Supported 00:14:54.167 Namespace Granularity: Not Supported 00:14:54.167 SQ Associations: Not Supported 00:14:54.167 UUID List: Not Supported 00:14:54.167 Multi-Domain Subsystem: Not Supported 00:14:54.167 Fixed Capacity Management: Not Supported 00:14:54.167 Variable Capacity Management: Not Supported 00:14:54.167 Delete Endurance Group: Not Supported 00:14:54.167 Delete NVM Set: Not Supported 00:14:54.167 Extended LBA Formats Supported: Supported 00:14:54.167 Flexible Data Placement Supported: Supported 00:14:54.167 00:14:54.167 Controller Memory Buffer Support 00:14:54.167 ================================ 00:14:54.167 Supported: No 00:14:54.167 00:14:54.167 Persistent Memory Region Support 00:14:54.167 ================================ 00:14:54.167 Supported: No 00:14:54.167 00:14:54.167 Admin Command Set Attributes 00:14:54.167 ============================ 00:14:54.167 Security Send/Receive: Not Supported 00:14:54.167 Format NVM: Supported 00:14:54.167 Firmware Activate/Download: Not Supported 00:14:54.167 Namespace Management: Supported 00:14:54.167 Device Self-Test: Not Supported 00:14:54.167 Directives: Supported 00:14:54.167 NVMe-MI: Not Supported 00:14:54.167 Virtualization Management: Not Supported 00:14:54.167 Doorbell Buffer Config: Supported 00:14:54.167 Get LBA Status Capability: Not Supported 00:14:54.167 Command & Feature Lockdown Capability: Not Supported 00:14:54.167 Abort Command Limit: 4 00:14:54.167 Async Event Request Limit: 4 00:14:54.167 Number of Firmware Slots: N/A 00:14:54.167 Firmware Slot 1 Read-Only: N/A 00:14:54.167 Firmware Activation Without Reset: N/A 00:14:54.167 Multiple Update Detection Support: N/A 00:14:54.167 Firmware Update Granularity: No Information Provided 00:14:54.167 Per-Namespace SMART Log: Yes 00:14:54.167 Asymmetric Namespace Access Log Page: Not Supported
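The identify output above (and continuing below) is flat "Field: Value" text, one field per line in the raw tool output, so individual values can be pulled back out of a captured run with standard tools. A minimal sketch, assuming the dump is redirected to a local file (the file name is hypothetical; in this job the output streams straight into the timestamp-prefixed build log):

    # Capture the identify dump for the 0000:00:13.0 controller, then pull
    # out single fields; may need root for VFIO/UIO access to the device.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 > identify-13.0.txt
    awk -F': ' '/^Max Number of Namespaces/ {print $2}' identify-13.0.txt  # 256
    awk -F': ' '/^Subsystem NQN/ {print $2}' identify-13.0.txt            # nqn.2019-08.org.qemu:fdp-subsys3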
00:14:54.167 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:54.167 Command Effects Log Page: Supported 00:14:54.167 Get Log Page Extended Data: Supported 00:14:54.167 Telemetry Log Pages: Not Supported 00:14:54.167 Persistent Event Log Pages: Not Supported 00:14:54.167 Supported Log Pages Log Page: May Support 00:14:54.167 Commands Supported & Effects Log Page: Not Supported 00:14:54.167 Feature Identifiers & Effects Log Page: May Support 00:14:54.167 NVMe-MI Commands & Effects Log Page: May Support 00:14:54.167 Data Area 4 for Telemetry Log: Not Supported 00:14:54.167 Error Log Page Entries Supported: 1 00:14:54.167 Keep Alive: Not Supported 00:14:54.167 00:14:54.167 NVM Command Set Attributes 00:14:54.167 ========================== 00:14:54.167 Submission Queue Entry Size 00:14:54.167 Max: 64 00:14:54.167 Min: 64 00:14:54.167 Completion Queue Entry Size 00:14:54.167 Max: 16 00:14:54.167 Min: 16 00:14:54.167 Number of Namespaces: 256 00:14:54.167 Compare Command: Supported 00:14:54.167 Write Uncorrectable Command: Not Supported 00:14:54.167 Dataset Management Command: Supported 00:14:54.167 Write Zeroes Command: Supported 00:14:54.167 Set Features Save Field: Supported 00:14:54.167 Reservations: Not Supported 00:14:54.167 Timestamp: Supported 00:14:54.167 Copy: Supported 00:14:54.167 Volatile Write Cache: Present 00:14:54.168 Atomic Write Unit (Normal): 1 00:14:54.168 Atomic Write Unit (PFail): 1 00:14:54.168 Atomic Compare & Write Unit: 1 00:14:54.168 Fused Compare & Write: Not Supported 00:14:54.168 Scatter-Gather List 00:14:54.168 SGL Command Set: Supported 00:14:54.168 SGL Keyed: Not Supported 00:14:54.168 SGL Bit Bucket Descriptor: Not Supported 00:14:54.168 SGL Metadata Pointer: Not Supported 00:14:54.168 Oversized SGL: Not Supported 00:14:54.168 SGL Metadata Address: Not Supported 00:14:54.168 SGL Offset: Not Supported 00:14:54.168 Transport SGL Data Block: Not Supported 00:14:54.168 Replay Protected Memory Block: Not Supported 00:14:54.168 00:14:54.168 Firmware Slot Information 00:14:54.168 ========================= 00:14:54.168 Active slot: 1 00:14:54.168 Slot 1 Firmware Revision: 1.0 00:14:54.168 00:14:54.168 00:14:54.168 Commands Supported and Effects 00:14:54.168 ============================== 00:14:54.168 Admin Commands 00:14:54.168 -------------- 00:14:54.168 Delete I/O Submission Queue (00h): Supported 00:14:54.168 Create I/O Submission Queue (01h): Supported 00:14:54.168 Get Log Page (02h): Supported 00:14:54.168 Delete I/O Completion Queue (04h): Supported 00:14:54.168 Create I/O Completion Queue (05h): Supported 00:14:54.168 Identify (06h): Supported 00:14:54.168 Abort (08h): Supported 00:14:54.168 Set Features (09h): Supported 00:14:54.168 Get Features (0Ah): Supported 00:14:54.168 Asynchronous Event Request (0Ch): Supported 00:14:54.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:54.168 Directive Send (19h): Supported 00:14:54.168 Directive Receive (1Ah): Supported 00:14:54.168 Virtualization Management (1Ch): Supported 00:14:54.168 Doorbell Buffer Config (7Ch): Supported 00:14:54.168 Format NVM (80h): Supported LBA-Change 00:14:54.168 I/O Commands 00:14:54.168 ------------ 00:14:54.168 Flush (00h): Supported LBA-Change 00:14:54.168 Write (01h): Supported LBA-Change 00:14:54.168 Read (02h): Supported 00:14:54.168 Compare (05h): Supported 00:14:54.168 Write Zeroes (08h): Supported LBA-Change 00:14:54.168 Dataset Management (09h): Supported LBA-Change 00:14:54.168 Unknown (0Ch): Supported 00:14:54.168 Unknown (12h): Supported 00:14:54.168 Copy
(19h): Supported LBA-Change 00:14:54.168 Unknown (1Dh): Supported LBA-Change 00:14:54.168 00:14:54.168 Error Log 00:14:54.168 ========= 00:14:54.168 00:14:54.168 Arbitration 00:14:54.168 =========== 00:14:54.168 Arbitration Burst: no limit 00:14:54.168 00:14:54.168 Power Management 00:14:54.168 ================ 00:14:54.168 Number of Power States: 1 00:14:54.168 Current Power State: Power State #0 00:14:54.168 Power State #0: 00:14:54.168 Max Power: 25.00 W 00:14:54.168 Non-Operational State: Operational 00:14:54.168 Entry Latency: 16 microseconds 00:14:54.168 Exit Latency: 4 microseconds 00:14:54.168 Relative Read Throughput: 0 00:14:54.168 Relative Read Latency: 0 00:14:54.168 Relative Write Throughput: 0 00:14:54.168 Relative Write Latency: 0 00:14:54.168 Idle Power: Not Reported 00:14:54.168 Active Power: Not Reported 00:14:54.168 Non-Operational Permissive Mode: Not Supported 00:14:54.168 00:14:54.168 Health Information 00:14:54.168 ================== 00:14:54.168 Critical Warnings: 00:14:54.168 Available Spare Space: OK 00:14:54.168 Temperature: OK 00:14:54.168 Device Reliability: OK 00:14:54.168 Read Only: No 00:14:54.168 Volatile Memory Backup: OK 00:14:54.168 Current Temperature: 323 Kelvin (50 Celsius) 00:14:54.168 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:54.168 Available Spare: 0% 00:14:54.168 Available Spare Threshold: 0% 00:14:54.168 Life Percentage Used: 0% 00:14:54.168 Data Units Read: 762 00:14:54.168 Data Units Written: 691 00:14:54.168 Host Read Commands: 32552 00:14:54.168 Host Write Commands: 31975 00:14:54.168 Controller Busy Time: 0 minutes 00:14:54.168 Power Cycles: 0 00:14:54.168 Power On Hours: 0 hours 00:14:54.168 Unsafe Shutdowns: 0 00:14:54.168 Unrecoverable Media Errors: 0 00:14:54.168 Lifetime Error Log Entries: 0 00:14:54.168 Warning Temperature Time: 0 minutes 00:14:54.168 Critical Temperature Time: 0 minutes 00:14:54.168 00:14:54.168 Number of Queues 00:14:54.168 ================ 00:14:54.168 Number of I/O Submission Queues: 64 00:14:54.168 Number of I/O Completion Queues: 64 00:14:54.168 00:14:54.168 ZNS Specific Controller Data 00:14:54.168 ============================ 00:14:54.168 Zone Append Size Limit: 0 00:14:54.168 00:14:54.168 00:14:54.168 Active Namespaces 00:14:54.168 ================= 00:14:54.168 Namespace ID:1 00:14:54.168 Error Recovery Timeout: Unlimited 00:14:54.168 Command Set Identifier: NVM (00h) 00:14:54.168 Deallocate: Supported 00:14:54.168 Deallocated/Unwritten Error: Supported 00:14:54.168 Deallocated Read Value: All 0x00 00:14:54.168 Deallocate in Write Zeroes: Not Supported 00:14:54.168 Deallocated Guard Field: 0xFFFF 00:14:54.168 Flush: Supported 00:14:54.168 Reservation: Not Supported 00:14:54.168 Namespace Sharing Capabilities: Multiple Controllers 00:14:54.168 Size (in LBAs): 262144 (1GiB) 00:14:54.168 Capacity (in LBAs): 262144 (1GiB) 00:14:54.168 Utilization (in LBAs): 262144 (1GiB) 00:14:54.168 Thin Provisioning: Not Supported 00:14:54.168 Per-NS Atomic Units: No 00:14:54.168 Maximum Single Source Range Length: 128 00:14:54.168 Maximum Copy Length: 128 00:14:54.168 Maximum Source Range Count: 128 00:14:54.168 NGUID/EUI64 Never Reused: No 00:14:54.168 Namespace Write Protected: No 00:14:54.168 Endurance group ID: 1 00:14:54.168 Number of LBA Formats: 8 00:14:54.168 Current LBA Format: LBA Format #04 00:14:54.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:54.168 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:54.168 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:54.168 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:14:54.168 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:54.168 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:54.168 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:54.168 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:54.168 00:14:54.168 Get Feature FDP: 00:14:54.168 ================ 00:14:54.168 Enabled: Yes 00:14:54.168 FDP configuration index: 0 00:14:54.168 00:14:54.168 FDP configurations log page 00:14:54.168 =========================== 00:14:54.168 Number of FDP configurations: 1 00:14:54.168 Version: 0 00:14:54.168 Size: 112 00:14:54.168 FDP Configuration Descriptor: 0 00:14:54.168 Descriptor Size: 96 00:14:54.168 Reclaim Group Identifier format: 2 00:14:54.168 FDP Volatile Write Cache: Not Present 00:14:54.168 FDP Configuration: Valid 00:14:54.168 Vendor Specific Size: 0 00:14:54.168 Number of Reclaim Groups: 2 00:14:54.168 Number of Reclaim Unit Handles: 8 00:14:54.168 Max Placement Identifiers: 128 00:14:54.168 Number of Namespaces Supported: 256 00:14:54.168 Reclaim Unit Nominal Size: 6000000 bytes 00:14:54.168 Estimated Reclaim Unit Time Limit: Not Reported 00:14:54.168 RUH Desc #000: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #001: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #002: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #003: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #004: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #005: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #006: RUH Type: Initially Isolated 00:14:54.168 RUH Desc #007: RUH Type: Initially Isolated 00:14:54.168 00:14:54.168 FDP reclaim unit handle usage log page 00:14:54.168 ====================================== 00:14:54.168 Number of Reclaim Unit Handles: 8 00:14:54.168 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:54.168 RUH Usage Desc #001: RUH Attributes: Unused 00:14:54.168 RUH Usage Desc #002: RUH Attributes: Unused 00:14:54.168 RUH Usage Desc #003: RUH Attributes: Unused 00:14:54.168 RUH Usage Desc #004: RUH Attributes: Unused 00:14:54.168 RUH Usage Desc #005: RUH Attributes: Unused 00:14:54.168 RUH Usage Desc #006: RUH Attributes: Unused 00:14:54.168 RUH Usage Desc #007: RUH Attributes: Unused 00:14:54.168 00:14:54.168 FDP statistics log page 00:14:54.168 ======================= 00:14:54.168 Host bytes with metadata written: 432054272 00:14:54.168 Media bytes with metadata written: 432099328 00:14:54.168 Media bytes erased: 0 00:14:54.168 00:14:54.168 FDP events log page 00:14:54.168 =================== 00:14:54.168 Number of FDP events: 0 00:14:54.168 00:14:54.168 NVM Specific Namespace Data 00:14:54.168 =========================== 00:14:54.168 Logical Block Storage Tag Mask: 0 00:14:54.168 Protection Information Capabilities: 00:14:54.168 16b Guard Protection Information Storage Tag Support: No 00:14:54.168 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:54.168 Storage Tag Check Read Support: No 00:14:54.168 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.168 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.168 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.169 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:54.169 00:14:54.169 real 0m1.989s 00:14:54.169 user 0m0.845s 00:14:54.169 sys 0m0.916s 00:14:54.169 13:48:40 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.169 13:48:40 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:14:54.169 ************************************ 00:14:54.169 END TEST nvme_identify 00:14:54.169 ************************************ 00:14:54.169 13:48:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:14:54.169 13:48:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:54.169 13:48:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.169 13:48:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.169 ************************************ 00:14:54.169 START TEST nvme_perf 00:14:54.169 ************************************ 00:14:54.169 13:48:41 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:14:54.169 13:48:41 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:14:55.587 Initializing NVMe Controllers 00:14:55.587 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:55.587 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:55.587 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:55.587 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:55.587 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:55.587 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:55.587 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:55.587 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:55.587 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:55.587 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:55.588 Initialization complete. Launching workers. 
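The nvme_perf stage launched just above drives all attached controllers through spdk_nvme_perf. Read against common spdk_nvme_perf options, -q 128 is the per-namespace queue depth, -w read a sequential-read workload, -o 12288 a 12 KiB I/O size, and -t 1 a one-second run; the doubled -L appears to be what enables the per-bucket latency histograms printed further below on top of the summary percentiles (that reading of -LL, like any gloss of -i 0 and -N, is an interpretation, not something this log states). A standalone re-run pinned to a single controller might look like the following sketch, where the binary path and BDF are taken from this log and the flag glosses are assumptions:

    # Hedged sketch: repeat the same workload by hand against one controller.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    sudo "$PERF" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N \
        -r 'trtype:PCIe traddr:0000:00:10.0'

Note on reading the histograms that follow: each bucket line gives a latency range in microseconds, the cumulative percentage of I/Os completed at or below the bucket's upper bound, and the bucket's own I/O count in parentheses (for example, the first 0000:00:10.0 bucket's 4 I/Os out of roughly 10,280 total is the 0.0388% shown).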
00:14:55.588 ======================================================== 00:14:55.588 Latency(us) 00:14:55.588 Device Information : IOPS MiB/s Average min max 00:14:55.588 PCIE (0000:00:10.0) NSID 1 from core 0: 10279.92 120.47 12496.57 8390.01 36096.97 00:14:55.588 PCIE (0000:00:11.0) NSID 1 from core 0: 10279.92 120.47 12475.66 8425.50 33520.01 00:14:55.588 PCIE (0000:00:13.0) NSID 1 from core 0: 10279.92 120.47 12455.39 8540.86 31334.83 00:14:55.588 PCIE (0000:00:12.0) NSID 1 from core 0: 10279.92 120.47 12434.44 8552.87 28930.00 00:14:55.588 PCIE (0000:00:12.0) NSID 2 from core 0: 10279.92 120.47 12413.10 8514.52 26409.14 00:14:55.588 PCIE (0000:00:12.0) NSID 3 from core 0: 10279.92 120.47 12391.55 8530.35 23853.43 00:14:55.588 ======================================================== 00:14:55.588 Total : 61679.55 722.81 12444.45 8390.01 36096.97 00:14:55.588 00:14:55.588 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:55.588 ================================================================================= 00:14:55.588 1.00000% : 8675.718us 00:14:55.588 10.00000% : 9237.455us 00:14:55.588 25.00000% : 9861.608us 00:14:55.588 50.00000% : 10610.590us 00:14:55.588 75.00000% : 11983.726us 00:14:55.588 90.00000% : 20597.029us 00:14:55.588 95.00000% : 21720.503us 00:14:55.588 98.00000% : 22594.316us 00:14:55.588 99.00000% : 27462.705us 00:14:55.588 99.50000% : 34203.550us 00:14:55.588 99.90000% : 35701.516us 00:14:55.588 99.99000% : 36200.838us 00:14:55.588 99.99900% : 36200.838us 00:14:55.588 99.99990% : 36200.838us 00:14:55.588 99.99999% : 36200.838us 00:14:55.588 00:14:55.588 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:55.588 ================================================================================= 00:14:55.588 1.00000% : 8800.549us 00:14:55.588 10.00000% : 9299.870us 00:14:55.588 25.00000% : 9861.608us 00:14:55.588 50.00000% : 10610.590us 00:14:55.588 75.00000% : 11921.310us 00:14:55.588 90.00000% : 20597.029us 00:14:55.588 95.00000% : 21595.672us 00:14:55.588 98.00000% : 22344.655us 00:14:55.588 99.00000% : 25340.587us 00:14:55.588 99.50000% : 31706.941us 00:14:55.588 99.90000% : 33204.907us 00:14:55.588 99.99000% : 33704.229us 00:14:55.588 99.99900% : 33704.229us 00:14:55.588 99.99990% : 33704.229us 00:14:55.588 99.99999% : 33704.229us 00:14:55.588 00:14:55.588 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:55.588 ================================================================================= 00:14:55.588 1.00000% : 8800.549us 00:14:55.588 10.00000% : 9299.870us 00:14:55.588 25.00000% : 9861.608us 00:14:55.588 50.00000% : 10610.590us 00:14:55.588 75.00000% : 11921.310us 00:14:55.588 90.00000% : 20597.029us 00:14:55.588 95.00000% : 21595.672us 00:14:55.588 98.00000% : 22344.655us 00:14:55.588 99.00000% : 23468.130us 00:14:55.588 99.50000% : 29584.823us 00:14:55.588 99.90000% : 31082.789us 00:14:55.588 99.99000% : 31332.450us 00:14:55.588 99.99900% : 31457.280us 00:14:55.588 99.99990% : 31457.280us 00:14:55.588 99.99999% : 31457.280us 00:14:55.588 00:14:55.588 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:55.588 ================================================================================= 00:14:55.588 1.00000% : 8800.549us 00:14:55.588 10.00000% : 9299.870us 00:14:55.588 25.00000% : 9861.608us 00:14:55.588 50.00000% : 10610.590us 00:14:55.588 75.00000% : 11983.726us 00:14:55.588 90.00000% : 20597.029us 00:14:55.588 95.00000% : 21470.842us 00:14:55.588 98.00000% : 22219.825us 
00:14:55.588 99.00000% : 22594.316us 00:14:55.588 99.50000% : 27088.213us 00:14:55.588 99.90000% : 28586.179us 00:14:55.588 99.99000% : 28960.670us 00:14:55.588 99.99900% : 28960.670us 00:14:55.588 99.99990% : 28960.670us 00:14:55.588 99.99999% : 28960.670us 00:14:55.588 00:14:55.588 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:55.588 ================================================================================= 00:14:55.588 1.00000% : 8800.549us 00:14:55.588 10.00000% : 9299.870us 00:14:55.588 25.00000% : 9861.608us 00:14:55.588 50.00000% : 10610.590us 00:14:55.588 75.00000% : 11983.726us 00:14:55.588 90.00000% : 20472.198us 00:14:55.588 95.00000% : 21470.842us 00:14:55.588 98.00000% : 22219.825us 00:14:55.588 99.00000% : 22594.316us 00:14:55.588 99.50000% : 24591.604us 00:14:55.588 99.90000% : 26089.570us 00:14:55.588 99.99000% : 26464.061us 00:14:55.588 99.99900% : 26464.061us 00:14:55.588 99.99990% : 26464.061us 00:14:55.588 99.99999% : 26464.061us 00:14:55.588 00:14:55.588 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:55.588 ================================================================================= 00:14:55.588 1.00000% : 8800.549us 00:14:55.588 10.00000% : 9299.870us 00:14:55.588 25.00000% : 9861.608us 00:14:55.588 50.00000% : 10610.590us 00:14:55.588 75.00000% : 11983.726us 00:14:55.588 90.00000% : 20472.198us 00:14:55.588 95.00000% : 21470.842us 00:14:55.588 98.00000% : 22094.994us 00:14:55.588 99.00000% : 22344.655us 00:14:55.588 99.50000% : 22719.147us 00:14:55.588 99.90000% : 23592.960us 00:14:55.588 99.99000% : 23842.621us 00:14:55.588 99.99900% : 23967.451us 00:14:55.588 99.99990% : 23967.451us 00:14:55.588 99.99999% : 23967.451us 00:14:55.588 00:14:55.588 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:55.588 ============================================================================== 00:14:55.588 Range in us Cumulative IO count 00:14:55.588 8363.642 - 8426.057: 0.0388% ( 4) 00:14:55.588 8426.057 - 8488.472: 0.1650% ( 13) 00:14:55.588 8488.472 - 8550.888: 0.2814% ( 12) 00:14:55.588 8550.888 - 8613.303: 0.6502% ( 38) 00:14:55.588 8613.303 - 8675.718: 1.2034% ( 57) 00:14:55.588 8675.718 - 8738.133: 1.8536% ( 67) 00:14:55.588 8738.133 - 8800.549: 2.7077% ( 88) 00:14:55.588 8800.549 - 8862.964: 3.5035% ( 82) 00:14:55.588 8862.964 - 8925.379: 4.4934% ( 102) 00:14:55.588 8925.379 - 8987.794: 5.5804% ( 112) 00:14:55.588 8987.794 - 9050.210: 6.6479% ( 110) 00:14:55.588 9050.210 - 9112.625: 7.8901% ( 128) 00:14:55.588 9112.625 - 9175.040: 9.1324% ( 128) 00:14:55.588 9175.040 - 9237.455: 10.4328% ( 134) 00:14:55.588 9237.455 - 9299.870: 11.7139% ( 132) 00:14:55.588 9299.870 - 9362.286: 13.1308% ( 146) 00:14:55.588 9362.286 - 9424.701: 14.5186% ( 143) 00:14:55.588 9424.701 - 9487.116: 15.9744% ( 150) 00:14:55.588 9487.116 - 9549.531: 17.5369% ( 161) 00:14:55.588 9549.531 - 9611.947: 19.1285% ( 164) 00:14:55.588 9611.947 - 9674.362: 20.7880% ( 171) 00:14:55.588 9674.362 - 9736.777: 22.4670% ( 173) 00:14:55.588 9736.777 - 9799.192: 24.3109% ( 190) 00:14:55.588 9799.192 - 9861.608: 26.0675% ( 181) 00:14:55.588 9861.608 - 9924.023: 27.7756% ( 176) 00:14:55.588 9924.023 - 9986.438: 29.6293% ( 191) 00:14:55.588 9986.438 - 10048.853: 31.6867% ( 212) 00:14:55.588 10048.853 - 10111.269: 33.8606% ( 224) 00:14:55.588 10111.269 - 10173.684: 35.9666% ( 217) 00:14:55.588 10173.684 - 10236.099: 38.1017% ( 220) 00:14:55.588 10236.099 - 10298.514: 40.1689% ( 213) 00:14:55.588 10298.514 - 10360.930: 42.1390% ( 203) 
00:14:55.588 10360.930 - 10423.345: 44.0314% ( 195) 00:14:55.588 10423.345 - 10485.760: 46.1277% ( 216) 00:14:55.588 10485.760 - 10548.175: 48.1464% ( 208) 00:14:55.588 10548.175 - 10610.590: 50.2329% ( 215) 00:14:55.588 10610.590 - 10673.006: 52.3486% ( 218) 00:14:55.588 10673.006 - 10735.421: 54.5710% ( 229) 00:14:55.588 10735.421 - 10797.836: 56.5314% ( 202) 00:14:55.588 10797.836 - 10860.251: 58.4724% ( 200) 00:14:55.588 10860.251 - 10922.667: 60.3552% ( 194) 00:14:55.588 10922.667 - 10985.082: 62.0439% ( 174) 00:14:55.588 10985.082 - 11047.497: 63.5190% ( 152) 00:14:55.588 11047.497 - 11109.912: 64.8098% ( 133) 00:14:55.588 11109.912 - 11172.328: 65.8676% ( 109) 00:14:55.588 11172.328 - 11234.743: 66.8090% ( 97) 00:14:55.588 11234.743 - 11297.158: 67.7310% ( 95) 00:14:55.588 11297.158 - 11359.573: 68.5365% ( 83) 00:14:55.588 11359.573 - 11421.989: 69.4099% ( 90) 00:14:55.588 11421.989 - 11484.404: 70.1572% ( 77) 00:14:55.588 11484.404 - 11546.819: 70.9433% ( 81) 00:14:55.588 11546.819 - 11609.234: 71.5450% ( 62) 00:14:55.588 11609.234 - 11671.650: 72.3311% ( 81) 00:14:55.588 11671.650 - 11734.065: 73.0396% ( 73) 00:14:55.588 11734.065 - 11796.480: 73.7092% ( 69) 00:14:55.588 11796.480 - 11858.895: 74.3595% ( 67) 00:14:55.588 11858.895 - 11921.310: 74.9903% ( 65) 00:14:55.588 11921.310 - 11983.726: 75.4464% ( 47) 00:14:55.589 11983.726 - 12046.141: 75.8152% ( 38) 00:14:55.589 12046.141 - 12108.556: 76.1452% ( 34) 00:14:55.589 12108.556 - 12170.971: 76.4946% ( 36) 00:14:55.589 12170.971 - 12233.387: 76.7275% ( 24) 00:14:55.589 12233.387 - 12295.802: 76.9410% ( 22) 00:14:55.589 12295.802 - 12358.217: 77.1254% ( 19) 00:14:55.589 12358.217 - 12420.632: 77.2516% ( 13) 00:14:55.589 12420.632 - 12483.048: 77.3874% ( 14) 00:14:55.589 12483.048 - 12545.463: 77.5621% ( 18) 00:14:55.589 12545.463 - 12607.878: 77.6689% ( 11) 00:14:55.589 12607.878 - 12670.293: 77.7562% ( 9) 00:14:55.589 12670.293 - 12732.709: 77.8436% ( 9) 00:14:55.589 12732.709 - 12795.124: 77.9018% ( 6) 00:14:55.589 12795.124 - 12857.539: 77.9503% ( 5) 00:14:55.589 12857.539 - 12919.954: 77.9891% ( 4) 00:14:55.589 12919.954 - 12982.370: 78.0474% ( 6) 00:14:55.589 12982.370 - 13044.785: 78.0959% ( 5) 00:14:55.589 13044.785 - 13107.200: 78.1638% ( 7) 00:14:55.589 13107.200 - 13169.615: 78.2318% ( 7) 00:14:55.589 13169.615 - 13232.030: 78.2900% ( 6) 00:14:55.589 13232.030 - 13294.446: 78.3579% ( 7) 00:14:55.589 13294.446 - 13356.861: 78.4064% ( 5) 00:14:55.589 13356.861 - 13419.276: 78.4647% ( 6) 00:14:55.589 13419.276 - 13481.691: 78.5132% ( 5) 00:14:55.589 13481.691 - 13544.107: 78.5811% ( 7) 00:14:55.589 13544.107 - 13606.522: 78.6297% ( 5) 00:14:55.589 13606.522 - 13668.937: 78.7073% ( 8) 00:14:55.589 13668.937 - 13731.352: 78.7655% ( 6) 00:14:55.589 13731.352 - 13793.768: 78.7946% ( 3) 00:14:55.589 13793.768 - 13856.183: 78.8432% ( 5) 00:14:55.589 13856.183 - 13918.598: 78.8917% ( 5) 00:14:55.589 13918.598 - 13981.013: 78.9305% ( 4) 00:14:55.589 13981.013 - 14043.429: 78.9693% ( 4) 00:14:55.589 14043.429 - 14105.844: 79.0082% ( 4) 00:14:55.589 14105.844 - 14168.259: 79.0664% ( 6) 00:14:55.589 14168.259 - 14230.674: 79.0955% ( 3) 00:14:55.589 14230.674 - 14293.090: 79.1149% ( 2) 00:14:55.589 14293.090 - 14355.505: 79.1537% ( 4) 00:14:55.589 14355.505 - 14417.920: 79.1731% ( 2) 00:14:55.589 14417.920 - 14480.335: 79.1925% ( 2) 00:14:55.589 14480.335 - 14542.750: 79.2120% ( 2) 00:14:55.589 14542.750 - 14605.166: 79.2314% ( 2) 00:14:55.589 14605.166 - 14667.581: 79.2508% ( 2) 00:14:55.589 14667.581 - 14729.996: 79.2702% ( 2) 
00:14:55.589 14729.996 - 14792.411: 79.3090% ( 4) 00:14:55.589 14792.411 - 14854.827: 79.3284% ( 2) 00:14:55.589 14854.827 - 14917.242: 79.3672% ( 4) 00:14:55.589 14917.242 - 14979.657: 79.4061% ( 4) 00:14:55.589 14979.657 - 15042.072: 79.4158% ( 1) 00:14:55.589 15042.072 - 15104.488: 79.4546% ( 4) 00:14:55.589 15104.488 - 15166.903: 79.5225% ( 7) 00:14:55.589 15166.903 - 15229.318: 79.5613% ( 4) 00:14:55.589 15229.318 - 15291.733: 79.5905% ( 3) 00:14:55.589 15291.733 - 15354.149: 79.6487% ( 6) 00:14:55.589 15354.149 - 15416.564: 79.6972% ( 5) 00:14:55.589 15416.564 - 15478.979: 79.7457% ( 5) 00:14:55.589 15478.979 - 15541.394: 79.7845% ( 4) 00:14:55.589 15541.394 - 15603.810: 79.8234% ( 4) 00:14:55.589 15603.810 - 15666.225: 79.8913% ( 7) 00:14:55.589 15666.225 - 15728.640: 79.9204% ( 3) 00:14:55.589 15728.640 - 15791.055: 79.9689% ( 5) 00:14:55.589 15791.055 - 15853.470: 80.0078% ( 4) 00:14:55.589 15853.470 - 15915.886: 80.0369% ( 3) 00:14:55.589 15915.886 - 15978.301: 80.0757% ( 4) 00:14:55.589 15978.301 - 16103.131: 80.1339% ( 6) 00:14:55.589 16103.131 - 16227.962: 80.2116% ( 8) 00:14:55.589 16227.962 - 16352.792: 80.2601% ( 5) 00:14:55.589 16352.792 - 16477.623: 80.3474% ( 9) 00:14:55.589 16477.623 - 16602.453: 80.4251% ( 8) 00:14:55.589 16602.453 - 16727.284: 80.5221% ( 10) 00:14:55.589 16727.284 - 16852.114: 80.6192% ( 10) 00:14:55.589 16852.114 - 16976.945: 80.7453% ( 13) 00:14:55.589 16976.945 - 17101.775: 80.8230% ( 8) 00:14:55.589 17101.775 - 17226.606: 80.9200% ( 10) 00:14:55.589 17226.606 - 17351.436: 81.0365% ( 12) 00:14:55.589 17351.436 - 17476.267: 81.1335% ( 10) 00:14:55.589 17476.267 - 17601.097: 81.2403% ( 11) 00:14:55.589 17601.097 - 17725.928: 81.3373% ( 10) 00:14:55.589 17725.928 - 17850.758: 81.4344% ( 10) 00:14:55.589 17850.758 - 17975.589: 81.5217% ( 9) 00:14:55.589 17975.589 - 18100.419: 81.6091% ( 9) 00:14:55.589 18100.419 - 18225.250: 81.7644% ( 16) 00:14:55.589 18225.250 - 18350.080: 81.9099% ( 15) 00:14:55.589 18350.080 - 18474.910: 82.0749% ( 17) 00:14:55.589 18474.910 - 18599.741: 82.2496% ( 18) 00:14:55.589 18599.741 - 18724.571: 82.5602% ( 32) 00:14:55.589 18724.571 - 18849.402: 82.9775% ( 43) 00:14:55.589 18849.402 - 18974.232: 83.4433% ( 48) 00:14:55.589 18974.232 - 19099.063: 83.9092% ( 48) 00:14:55.589 19099.063 - 19223.893: 84.4720% ( 58) 00:14:55.589 19223.893 - 19348.724: 84.9670% ( 51) 00:14:55.589 19348.724 - 19473.554: 85.5493% ( 60) 00:14:55.589 19473.554 - 19598.385: 86.1316% ( 60) 00:14:55.589 19598.385 - 19723.215: 86.6071% ( 49) 00:14:55.589 19723.215 - 19848.046: 87.2186% ( 63) 00:14:55.589 19848.046 - 19972.876: 87.6941% ( 49) 00:14:55.589 19972.876 - 20097.707: 88.2764% ( 60) 00:14:55.589 20097.707 - 20222.537: 88.8102% ( 55) 00:14:55.589 20222.537 - 20347.368: 89.3731% ( 58) 00:14:55.589 20347.368 - 20472.198: 89.8680% ( 51) 00:14:55.589 20472.198 - 20597.029: 90.4503% ( 60) 00:14:55.589 20597.029 - 20721.859: 91.0423% ( 61) 00:14:55.589 20721.859 - 20846.690: 91.6052% ( 58) 00:14:55.589 20846.690 - 20971.520: 92.1099% ( 52) 00:14:55.589 20971.520 - 21096.350: 92.6533% ( 56) 00:14:55.589 21096.350 - 21221.181: 93.1386% ( 50) 00:14:55.589 21221.181 - 21346.011: 93.6044% ( 48) 00:14:55.589 21346.011 - 21470.842: 94.1576% ( 57) 00:14:55.589 21470.842 - 21595.672: 94.6332% ( 49) 00:14:55.589 21595.672 - 21720.503: 95.1475% ( 53) 00:14:55.589 21720.503 - 21845.333: 95.5648% ( 43) 00:14:55.589 21845.333 - 21970.164: 96.0016% ( 45) 00:14:55.589 21970.164 - 22094.994: 96.4480% ( 46) 00:14:55.589 22094.994 - 22219.825: 96.8459% ( 41) 00:14:55.589 
22219.825 - 22344.655: 97.2729% ( 44) 00:14:55.589 22344.655 - 22469.486: 97.7096% ( 45) 00:14:55.589 22469.486 - 22594.316: 98.0784% ( 38) 00:14:55.589 22594.316 - 22719.147: 98.3210% ( 25) 00:14:55.589 22719.147 - 22843.977: 98.4375% ( 12) 00:14:55.589 22843.977 - 22968.808: 98.5345% ( 10) 00:14:55.589 22968.808 - 23093.638: 98.5928% ( 6) 00:14:55.589 23093.638 - 23218.469: 98.6510% ( 6) 00:14:55.589 23218.469 - 23343.299: 98.6801% ( 3) 00:14:55.589 23343.299 - 23468.130: 98.6995% ( 2) 00:14:55.589 23468.130 - 23592.960: 98.7189% ( 2) 00:14:55.589 23592.960 - 23717.790: 98.7384% ( 2) 00:14:55.589 23717.790 - 23842.621: 98.7578% ( 2) 00:14:55.589 26339.230 - 26464.061: 98.7675% ( 1) 00:14:55.589 26464.061 - 26588.891: 98.7966% ( 3) 00:14:55.589 26588.891 - 26713.722: 98.8257% ( 3) 00:14:55.589 26713.722 - 26838.552: 98.8548% ( 3) 00:14:55.589 26838.552 - 26963.383: 98.8936% ( 4) 00:14:55.589 26963.383 - 27088.213: 98.9130% ( 2) 00:14:55.589 27088.213 - 27213.044: 98.9422% ( 3) 00:14:55.589 27213.044 - 27337.874: 98.9713% ( 3) 00:14:55.589 27337.874 - 27462.705: 99.0101% ( 4) 00:14:55.589 27462.705 - 27587.535: 99.0295% ( 2) 00:14:55.589 27587.535 - 27712.366: 99.0586% ( 3) 00:14:55.589 27712.366 - 27837.196: 99.0877% ( 3) 00:14:55.589 27837.196 - 27962.027: 99.1266% ( 4) 00:14:55.589 27962.027 - 28086.857: 99.1460% ( 2) 00:14:55.589 28086.857 - 28211.688: 99.1848% ( 4) 00:14:55.589 28211.688 - 28336.518: 99.2042% ( 2) 00:14:55.589 28336.518 - 28461.349: 99.2236% ( 2) 00:14:55.589 28461.349 - 28586.179: 99.2624% ( 4) 00:14:55.589 28586.179 - 28711.010: 99.3012% ( 4) 00:14:55.589 28711.010 - 28835.840: 99.3304% ( 3) 00:14:55.589 28835.840 - 28960.670: 99.3595% ( 3) 00:14:55.589 28960.670 - 29085.501: 99.3789% ( 2) 00:14:55.589 33454.568 - 33704.229: 99.3983% ( 2) 00:14:55.589 33704.229 - 33953.890: 99.4565% ( 6) 00:14:55.589 33953.890 - 34203.550: 99.5245% ( 7) 00:14:55.589 34203.550 - 34453.211: 99.5924% ( 7) 00:14:55.589 34453.211 - 34702.872: 99.6506% ( 6) 00:14:55.589 34702.872 - 34952.533: 99.7186% ( 7) 00:14:55.589 34952.533 - 35202.194: 99.7768% ( 6) 00:14:55.589 35202.194 - 35451.855: 99.8447% ( 7) 00:14:55.589 35451.855 - 35701.516: 99.9127% ( 7) 00:14:55.589 35701.516 - 35951.177: 99.9709% ( 6) 00:14:55.589 35951.177 - 36200.838: 100.0000% ( 3) 00:14:55.589 00:14:55.589 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:55.589 ============================================================================== 00:14:55.589 Range in us Cumulative IO count 00:14:55.589 8363.642 - 8426.057: 0.0097% ( 1) 00:14:55.589 8426.057 - 8488.472: 0.0388% ( 3) 00:14:55.589 8488.472 - 8550.888: 0.1165% ( 8) 00:14:55.589 8550.888 - 8613.303: 0.2232% ( 11) 00:14:55.589 8613.303 - 8675.718: 0.4852% ( 27) 00:14:55.590 8675.718 - 8738.133: 0.9220% ( 45) 00:14:55.590 8738.133 - 8800.549: 1.6110% ( 71) 00:14:55.590 8800.549 - 8862.964: 2.4651% ( 88) 00:14:55.590 8862.964 - 8925.379: 3.5035% ( 107) 00:14:55.590 8925.379 - 8987.794: 4.5516% ( 108) 00:14:55.590 8987.794 - 9050.210: 5.7162% ( 120) 00:14:55.590 9050.210 - 9112.625: 6.9391% ( 126) 00:14:55.590 9112.625 - 9175.040: 8.2880% ( 139) 00:14:55.590 9175.040 - 9237.455: 9.7923% ( 155) 00:14:55.590 9237.455 - 9299.870: 11.3451% ( 160) 00:14:55.590 9299.870 - 9362.286: 12.9270% ( 163) 00:14:55.590 9362.286 - 9424.701: 14.4313% ( 155) 00:14:55.590 9424.701 - 9487.116: 16.1588% ( 178) 00:14:55.590 9487.116 - 9549.531: 17.8474% ( 174) 00:14:55.590 9549.531 - 9611.947: 19.4585% ( 166) 00:14:55.590 9611.947 - 9674.362: 21.1277% ( 172) 
00:14:55.590 9674.362 - 9736.777: 22.7387% ( 166) 00:14:55.590 9736.777 - 9799.192: 24.5050% ( 182) 00:14:55.590 9799.192 - 9861.608: 26.2811% ( 183) 00:14:55.590 9861.608 - 9924.023: 28.0765% ( 185) 00:14:55.590 9924.023 - 9986.438: 29.8234% ( 180) 00:14:55.590 9986.438 - 10048.853: 31.7935% ( 203) 00:14:55.590 10048.853 - 10111.269: 33.8121% ( 208) 00:14:55.590 10111.269 - 10173.684: 35.9084% ( 216) 00:14:55.590 10173.684 - 10236.099: 38.0823% ( 224) 00:14:55.590 10236.099 - 10298.514: 40.1883% ( 217) 00:14:55.590 10298.514 - 10360.930: 42.2943% ( 217) 00:14:55.590 10360.930 - 10423.345: 44.4682% ( 224) 00:14:55.590 10423.345 - 10485.760: 46.5353% ( 213) 00:14:55.590 10485.760 - 10548.175: 48.6607% ( 219) 00:14:55.590 10548.175 - 10610.590: 50.8929% ( 230) 00:14:55.590 10610.590 - 10673.006: 53.0474% ( 222) 00:14:55.590 10673.006 - 10735.421: 55.2116% ( 223) 00:14:55.590 10735.421 - 10797.836: 57.3175% ( 217) 00:14:55.590 10797.836 - 10860.251: 59.3168% ( 206) 00:14:55.590 10860.251 - 10922.667: 61.1316% ( 187) 00:14:55.590 10922.667 - 10985.082: 62.5194% ( 143) 00:14:55.590 10985.082 - 11047.497: 63.7714% ( 129) 00:14:55.590 11047.497 - 11109.912: 64.9262% ( 119) 00:14:55.590 11109.912 - 11172.328: 65.9356% ( 104) 00:14:55.590 11172.328 - 11234.743: 66.9255% ( 102) 00:14:55.590 11234.743 - 11297.158: 67.8377% ( 94) 00:14:55.590 11297.158 - 11359.573: 68.7306% ( 92) 00:14:55.590 11359.573 - 11421.989: 69.6234% ( 92) 00:14:55.590 11421.989 - 11484.404: 70.4581% ( 86) 00:14:55.590 11484.404 - 11546.819: 71.3024% ( 87) 00:14:55.590 11546.819 - 11609.234: 72.1079% ( 83) 00:14:55.590 11609.234 - 11671.650: 72.9328% ( 85) 00:14:55.590 11671.650 - 11734.065: 73.6413% ( 73) 00:14:55.590 11734.065 - 11796.480: 74.3401% ( 72) 00:14:55.590 11796.480 - 11858.895: 74.8932% ( 57) 00:14:55.590 11858.895 - 11921.310: 75.3591% ( 48) 00:14:55.590 11921.310 - 11983.726: 75.7085% ( 36) 00:14:55.590 11983.726 - 12046.141: 76.0093% ( 31) 00:14:55.590 12046.141 - 12108.556: 76.2811% ( 28) 00:14:55.590 12108.556 - 12170.971: 76.5528% ( 28) 00:14:55.590 12170.971 - 12233.387: 76.7566% ( 21) 00:14:55.590 12233.387 - 12295.802: 76.9507% ( 20) 00:14:55.590 12295.802 - 12358.217: 77.1545% ( 21) 00:14:55.590 12358.217 - 12420.632: 77.3583% ( 21) 00:14:55.590 12420.632 - 12483.048: 77.5330% ( 18) 00:14:55.590 12483.048 - 12545.463: 77.6883% ( 16) 00:14:55.590 12545.463 - 12607.878: 77.7950% ( 11) 00:14:55.590 12607.878 - 12670.293: 77.8727% ( 8) 00:14:55.590 12670.293 - 12732.709: 77.9503% ( 8) 00:14:55.590 12732.709 - 12795.124: 78.0182% ( 7) 00:14:55.590 12795.124 - 12857.539: 78.0765% ( 6) 00:14:55.590 12857.539 - 12919.954: 78.1444% ( 7) 00:14:55.590 12919.954 - 12982.370: 78.2026% ( 6) 00:14:55.590 12982.370 - 13044.785: 78.2512% ( 5) 00:14:55.590 13044.785 - 13107.200: 78.3191% ( 7) 00:14:55.590 13107.200 - 13169.615: 78.3676% ( 5) 00:14:55.590 13169.615 - 13232.030: 78.4064% ( 4) 00:14:55.590 13232.030 - 13294.446: 78.4453% ( 4) 00:14:55.590 13294.446 - 13356.861: 78.4841% ( 4) 00:14:55.590 13356.861 - 13419.276: 78.5229% ( 4) 00:14:55.590 13419.276 - 13481.691: 78.5617% ( 4) 00:14:55.590 13481.691 - 13544.107: 78.5908% ( 3) 00:14:55.590 13544.107 - 13606.522: 78.6297% ( 4) 00:14:55.590 13606.522 - 13668.937: 78.6588% ( 3) 00:14:55.590 13668.937 - 13731.352: 78.6782% ( 2) 00:14:55.590 13731.352 - 13793.768: 78.6879% ( 1) 00:14:55.590 13793.768 - 13856.183: 78.7073% ( 2) 00:14:55.590 13856.183 - 13918.598: 78.7267% ( 2) 00:14:55.590 13918.598 - 13981.013: 78.7461% ( 2) 00:14:55.590 13981.013 - 14043.429: 
78.7655% ( 2) 00:14:55.590 14043.429 - 14105.844: 78.7752% ( 1) 00:14:55.590 14105.844 - 14168.259: 78.7946% ( 2) 00:14:55.590 14168.259 - 14230.674: 78.8141% ( 2) 00:14:55.590 14230.674 - 14293.090: 78.8335% ( 2) 00:14:55.590 14293.090 - 14355.505: 78.8529% ( 2) 00:14:55.590 14355.505 - 14417.920: 78.8723% ( 2) 00:14:55.590 14417.920 - 14480.335: 78.8820% ( 1) 00:14:55.590 14480.335 - 14542.750: 78.8917% ( 1) 00:14:55.590 14542.750 - 14605.166: 78.9111% ( 2) 00:14:55.590 14605.166 - 14667.581: 78.9305% ( 2) 00:14:55.590 14667.581 - 14729.996: 78.9499% ( 2) 00:14:55.590 14729.996 - 14792.411: 78.9790% ( 3) 00:14:55.590 14792.411 - 14854.827: 78.9887% ( 1) 00:14:55.590 14854.827 - 14917.242: 79.0082% ( 2) 00:14:55.590 14917.242 - 14979.657: 79.0276% ( 2) 00:14:55.590 14979.657 - 15042.072: 79.0470% ( 2) 00:14:55.590 15042.072 - 15104.488: 79.0664% ( 2) 00:14:55.590 15104.488 - 15166.903: 79.0858% ( 2) 00:14:55.590 15166.903 - 15229.318: 79.1052% ( 2) 00:14:55.590 15229.318 - 15291.733: 79.1246% ( 2) 00:14:55.590 15291.733 - 15354.149: 79.1440% ( 2) 00:14:55.590 15354.149 - 15416.564: 79.1634% ( 2) 00:14:55.590 15416.564 - 15478.979: 79.1828% ( 2) 00:14:55.590 15478.979 - 15541.394: 79.2217% ( 4) 00:14:55.590 15541.394 - 15603.810: 79.2508% ( 3) 00:14:55.590 15603.810 - 15666.225: 79.2799% ( 3) 00:14:55.590 15666.225 - 15728.640: 79.3187% ( 4) 00:14:55.590 15728.640 - 15791.055: 79.3575% ( 4) 00:14:55.590 15791.055 - 15853.470: 79.3964% ( 4) 00:14:55.590 15853.470 - 15915.886: 79.4449% ( 5) 00:14:55.590 15915.886 - 15978.301: 79.5128% ( 7) 00:14:55.590 15978.301 - 16103.131: 79.6487% ( 14) 00:14:55.590 16103.131 - 16227.962: 79.8040% ( 16) 00:14:55.590 16227.962 - 16352.792: 79.9301% ( 13) 00:14:55.590 16352.792 - 16477.623: 80.1145% ( 19) 00:14:55.590 16477.623 - 16602.453: 80.2795% ( 17) 00:14:55.590 16602.453 - 16727.284: 80.4251% ( 15) 00:14:55.590 16727.284 - 16852.114: 80.5609% ( 14) 00:14:55.590 16852.114 - 16976.945: 80.7065% ( 15) 00:14:55.590 16976.945 - 17101.775: 80.8521% ( 15) 00:14:55.590 17101.775 - 17226.606: 81.0074% ( 16) 00:14:55.590 17226.606 - 17351.436: 81.1432% ( 14) 00:14:55.590 17351.436 - 17476.267: 81.3082% ( 17) 00:14:55.590 17476.267 - 17601.097: 81.4441% ( 14) 00:14:55.590 17601.097 - 17725.928: 81.6188% ( 18) 00:14:55.590 17725.928 - 17850.758: 81.7352% ( 12) 00:14:55.590 17850.758 - 17975.589: 81.8711% ( 14) 00:14:55.590 17975.589 - 18100.419: 81.9779% ( 11) 00:14:55.590 18100.419 - 18225.250: 82.0943% ( 12) 00:14:55.590 18225.250 - 18350.080: 82.2108% ( 12) 00:14:55.590 18350.080 - 18474.910: 82.3273% ( 12) 00:14:55.590 18474.910 - 18599.741: 82.4049% ( 8) 00:14:55.590 18599.741 - 18724.571: 82.5116% ( 11) 00:14:55.590 18724.571 - 18849.402: 82.6087% ( 10) 00:14:55.590 18849.402 - 18974.232: 82.7640% ( 16) 00:14:55.590 18974.232 - 19099.063: 83.0551% ( 30) 00:14:55.590 19099.063 - 19223.893: 83.4530% ( 41) 00:14:55.590 19223.893 - 19348.724: 83.8800% ( 44) 00:14:55.590 19348.724 - 19473.554: 84.4138% ( 55) 00:14:55.590 19473.554 - 19598.385: 84.9961% ( 60) 00:14:55.590 19598.385 - 19723.215: 85.5784% ( 60) 00:14:55.590 19723.215 - 19848.046: 86.2092% ( 65) 00:14:55.590 19848.046 - 19972.876: 86.8886% ( 70) 00:14:55.590 19972.876 - 20097.707: 87.5679% ( 70) 00:14:55.590 20097.707 - 20222.537: 88.1891% ( 64) 00:14:55.590 20222.537 - 20347.368: 88.8102% ( 64) 00:14:55.590 20347.368 - 20472.198: 89.4701% ( 68) 00:14:55.590 20472.198 - 20597.029: 90.1495% ( 70) 00:14:55.590 20597.029 - 20721.859: 90.7900% ( 66) 00:14:55.590 20721.859 - 20846.690: 91.4305% ( 66) 
00:14:55.590 20846.690 - 20971.520: 92.0516% ( 64) 00:14:55.590 20971.520 - 21096.350: 92.6727% ( 64) 00:14:55.590 21096.350 - 21221.181: 93.3036% ( 65) 00:14:55.590 21221.181 - 21346.011: 93.9344% ( 65) 00:14:55.590 21346.011 - 21470.842: 94.5555% ( 64) 00:14:55.590 21470.842 - 21595.672: 95.1281% ( 59) 00:14:55.590 21595.672 - 21720.503: 95.7298% ( 62) 00:14:55.590 21720.503 - 21845.333: 96.2927% ( 58) 00:14:55.590 21845.333 - 21970.164: 96.8168% ( 54) 00:14:55.590 21970.164 - 22094.994: 97.3797% ( 58) 00:14:55.590 22094.994 - 22219.825: 97.8164% ( 45) 00:14:55.590 22219.825 - 22344.655: 98.1949% ( 39) 00:14:55.590 22344.655 - 22469.486: 98.4084% ( 22) 00:14:55.590 22469.486 - 22594.316: 98.5540% ( 15) 00:14:55.590 22594.316 - 22719.147: 98.6219% ( 7) 00:14:55.590 22719.147 - 22843.977: 98.6898% ( 7) 00:14:55.590 22843.977 - 22968.808: 98.7286% ( 4) 00:14:55.590 22968.808 - 23093.638: 98.7578% ( 3) 00:14:55.590 24341.943 - 24466.773: 98.7675% ( 1) 00:14:55.590 24466.773 - 24591.604: 98.7966% ( 3) 00:14:55.590 24591.604 - 24716.434: 98.8354% ( 4) 00:14:55.590 24716.434 - 24841.265: 98.8645% ( 3) 00:14:55.590 24841.265 - 24966.095: 98.9033% ( 4) 00:14:55.590 24966.095 - 25090.926: 98.9325% ( 3) 00:14:55.590 25090.926 - 25215.756: 98.9616% ( 3) 00:14:55.590 25215.756 - 25340.587: 99.0004% ( 4) 00:14:55.590 25340.587 - 25465.417: 99.0295% ( 3) 00:14:55.590 25465.417 - 25590.248: 99.0586% ( 3) 00:14:55.590 25590.248 - 25715.078: 99.0877% ( 3) 00:14:55.591 25715.078 - 25839.909: 99.1266% ( 4) 00:14:55.591 25839.909 - 25964.739: 99.1557% ( 3) 00:14:55.591 25964.739 - 26089.570: 99.1945% ( 4) 00:14:55.591 26089.570 - 26214.400: 99.2236% ( 3) 00:14:55.591 26214.400 - 26339.230: 99.2624% ( 4) 00:14:55.591 26339.230 - 26464.061: 99.3012% ( 4) 00:14:55.591 26464.061 - 26588.891: 99.3304% ( 3) 00:14:55.591 26588.891 - 26713.722: 99.3595% ( 3) 00:14:55.591 26713.722 - 26838.552: 99.3789% ( 2) 00:14:55.591 31082.789 - 31207.619: 99.4080% ( 3) 00:14:55.591 31207.619 - 31332.450: 99.4468% ( 4) 00:14:55.591 31332.450 - 31457.280: 99.4662% ( 2) 00:14:55.591 31457.280 - 31582.110: 99.4856% ( 2) 00:14:55.591 31582.110 - 31706.941: 99.5245% ( 4) 00:14:55.591 31706.941 - 31831.771: 99.5439% ( 2) 00:14:55.591 31831.771 - 31956.602: 99.5827% ( 4) 00:14:55.591 31956.602 - 32206.263: 99.6506% ( 7) 00:14:55.591 32206.263 - 32455.924: 99.7186% ( 7) 00:14:55.591 32455.924 - 32705.585: 99.7865% ( 7) 00:14:55.591 32705.585 - 32955.246: 99.8447% ( 6) 00:14:55.591 32955.246 - 33204.907: 99.9127% ( 7) 00:14:55.591 33204.907 - 33454.568: 99.9806% ( 7) 00:14:55.591 33454.568 - 33704.229: 100.0000% ( 2) 00:14:55.591 00:14:55.591 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:55.591 ============================================================================== 00:14:55.591 Range in us Cumulative IO count 00:14:55.591 8488.472 - 8550.888: 0.0194% ( 2) 00:14:55.591 8550.888 - 8613.303: 0.1553% ( 14) 00:14:55.591 8613.303 - 8675.718: 0.3979% ( 25) 00:14:55.591 8675.718 - 8738.133: 0.9026% ( 52) 00:14:55.591 8738.133 - 8800.549: 1.5528% ( 67) 00:14:55.591 8800.549 - 8862.964: 2.3292% ( 80) 00:14:55.591 8862.964 - 8925.379: 3.2123% ( 91) 00:14:55.591 8925.379 - 8987.794: 4.3672% ( 119) 00:14:55.591 8987.794 - 9050.210: 5.5998% ( 127) 00:14:55.591 9050.210 - 9112.625: 6.9196% ( 136) 00:14:55.591 9112.625 - 9175.040: 8.2298% ( 135) 00:14:55.591 9175.040 - 9237.455: 9.7050% ( 152) 00:14:55.591 9237.455 - 9299.870: 11.2286% ( 157) 00:14:55.591 9299.870 - 9362.286: 12.7911% ( 161) 00:14:55.591 9362.286 - 9424.701: 
14.3439% ( 160) 00:14:55.591 9424.701 - 9487.116: 15.9938% ( 170) 00:14:55.591 9487.116 - 9549.531: 17.6533% ( 171) 00:14:55.591 9549.531 - 9611.947: 19.2547% ( 165) 00:14:55.591 9611.947 - 9674.362: 20.8657% ( 166) 00:14:55.591 9674.362 - 9736.777: 22.5835% ( 177) 00:14:55.591 9736.777 - 9799.192: 24.2042% ( 167) 00:14:55.591 9799.192 - 9861.608: 25.9220% ( 177) 00:14:55.591 9861.608 - 9924.023: 27.6300% ( 176) 00:14:55.591 9924.023 - 9986.438: 29.5516% ( 198) 00:14:55.591 9986.438 - 10048.853: 31.4732% ( 198) 00:14:55.591 10048.853 - 10111.269: 33.5889% ( 218) 00:14:55.591 10111.269 - 10173.684: 35.7240% ( 220) 00:14:55.591 10173.684 - 10236.099: 37.8591% ( 220) 00:14:55.591 10236.099 - 10298.514: 39.9651% ( 217) 00:14:55.591 10298.514 - 10360.930: 42.1681% ( 227) 00:14:55.591 10360.930 - 10423.345: 44.1576% ( 205) 00:14:55.591 10423.345 - 10485.760: 46.2830% ( 219) 00:14:55.591 10485.760 - 10548.175: 48.3502% ( 213) 00:14:55.591 10548.175 - 10610.590: 50.4950% ( 221) 00:14:55.591 10610.590 - 10673.006: 52.7271% ( 230) 00:14:55.591 10673.006 - 10735.421: 54.9592% ( 230) 00:14:55.591 10735.421 - 10797.836: 57.0652% ( 217) 00:14:55.591 10797.836 - 10860.251: 59.0936% ( 209) 00:14:55.591 10860.251 - 10922.667: 61.0054% ( 197) 00:14:55.591 10922.667 - 10985.082: 62.5388% ( 158) 00:14:55.591 10985.082 - 11047.497: 63.7325% ( 123) 00:14:55.591 11047.497 - 11109.912: 64.8777% ( 118) 00:14:55.591 11109.912 - 11172.328: 65.8579% ( 101) 00:14:55.591 11172.328 - 11234.743: 66.7605% ( 93) 00:14:55.591 11234.743 - 11297.158: 67.6727% ( 94) 00:14:55.591 11297.158 - 11359.573: 68.5268% ( 88) 00:14:55.591 11359.573 - 11421.989: 69.4391% ( 94) 00:14:55.591 11421.989 - 11484.404: 70.2640% ( 85) 00:14:55.591 11484.404 - 11546.819: 71.1957% ( 96) 00:14:55.591 11546.819 - 11609.234: 71.9623% ( 79) 00:14:55.591 11609.234 - 11671.650: 72.7387% ( 80) 00:14:55.591 11671.650 - 11734.065: 73.4860% ( 77) 00:14:55.591 11734.065 - 11796.480: 74.1363% ( 67) 00:14:55.591 11796.480 - 11858.895: 74.6506% ( 53) 00:14:55.591 11858.895 - 11921.310: 75.1553% ( 52) 00:14:55.591 11921.310 - 11983.726: 75.5726% ( 43) 00:14:55.591 11983.726 - 12046.141: 75.9123% ( 35) 00:14:55.591 12046.141 - 12108.556: 76.2131% ( 31) 00:14:55.591 12108.556 - 12170.971: 76.4752% ( 27) 00:14:55.591 12170.971 - 12233.387: 76.6984% ( 23) 00:14:55.591 12233.387 - 12295.802: 76.9119% ( 22) 00:14:55.591 12295.802 - 12358.217: 77.1157% ( 21) 00:14:55.591 12358.217 - 12420.632: 77.3098% ( 20) 00:14:55.591 12420.632 - 12483.048: 77.5330% ( 23) 00:14:55.591 12483.048 - 12545.463: 77.7174% ( 19) 00:14:55.591 12545.463 - 12607.878: 77.8533% ( 14) 00:14:55.591 12607.878 - 12670.293: 77.9503% ( 10) 00:14:55.591 12670.293 - 12732.709: 78.0182% ( 7) 00:14:55.591 12732.709 - 12795.124: 78.0765% ( 6) 00:14:55.591 12795.124 - 12857.539: 78.1250% ( 5) 00:14:55.591 12857.539 - 12919.954: 78.1638% ( 4) 00:14:55.591 12919.954 - 12982.370: 78.1929% ( 3) 00:14:55.591 12982.370 - 13044.785: 78.2318% ( 4) 00:14:55.591 13044.785 - 13107.200: 78.2706% ( 4) 00:14:55.591 13107.200 - 13169.615: 78.3191% ( 5) 00:14:55.591 13169.615 - 13232.030: 78.3579% ( 4) 00:14:55.591 13232.030 - 13294.446: 78.3967% ( 4) 00:14:55.591 13294.446 - 13356.861: 78.4259% ( 3) 00:14:55.591 13356.861 - 13419.276: 78.4841% ( 6) 00:14:55.591 13419.276 - 13481.691: 78.5132% ( 3) 00:14:55.591 13481.691 - 13544.107: 78.5520% ( 4) 00:14:55.591 13544.107 - 13606.522: 78.5908% ( 4) 00:14:55.591 13606.522 - 13668.937: 78.6394% ( 5) 00:14:55.591 13668.937 - 13731.352: 78.6685% ( 3) 00:14:55.591 13731.352 - 
13793.768: 78.7170% ( 5) 00:14:55.591 13793.768 - 13856.183: 78.7267% ( 1) 00:14:55.591 13856.183 - 13918.598: 78.7461% ( 2) 00:14:55.591 13918.598 - 13981.013: 78.7655% ( 2) 00:14:55.591 13981.013 - 14043.429: 78.7946% ( 3) 00:14:55.591 14043.429 - 14105.844: 78.8141% ( 2) 00:14:55.591 14105.844 - 14168.259: 78.8335% ( 2) 00:14:55.591 14168.259 - 14230.674: 78.8432% ( 1) 00:14:55.591 14230.674 - 14293.090: 78.8626% ( 2) 00:14:55.591 14293.090 - 14355.505: 78.8820% ( 2) 00:14:55.591 14854.827 - 14917.242: 78.8917% ( 1) 00:14:55.591 14917.242 - 14979.657: 78.9305% ( 4) 00:14:55.591 14979.657 - 15042.072: 78.9596% ( 3) 00:14:55.591 15042.072 - 15104.488: 79.0082% ( 5) 00:14:55.591 15104.488 - 15166.903: 79.0373% ( 3) 00:14:55.591 15166.903 - 15229.318: 79.0761% ( 4) 00:14:55.591 15229.318 - 15291.733: 79.1052% ( 3) 00:14:55.591 15291.733 - 15354.149: 79.1343% ( 3) 00:14:55.591 15354.149 - 15416.564: 79.1731% ( 4) 00:14:55.591 15416.564 - 15478.979: 79.2023% ( 3) 00:14:55.591 15478.979 - 15541.394: 79.2411% ( 4) 00:14:55.591 15541.394 - 15603.810: 79.3090% ( 7) 00:14:55.591 15603.810 - 15666.225: 79.3866% ( 8) 00:14:55.591 15666.225 - 15728.640: 79.4449% ( 6) 00:14:55.591 15728.640 - 15791.055: 79.5128% ( 7) 00:14:55.591 15791.055 - 15853.470: 79.5905% ( 8) 00:14:55.591 15853.470 - 15915.886: 79.6487% ( 6) 00:14:55.591 15915.886 - 15978.301: 79.7263% ( 8) 00:14:55.591 15978.301 - 16103.131: 79.8525% ( 13) 00:14:55.591 16103.131 - 16227.962: 79.9884% ( 14) 00:14:55.591 16227.962 - 16352.792: 80.1339% ( 15) 00:14:55.591 16352.792 - 16477.623: 80.2795% ( 15) 00:14:55.591 16477.623 - 16602.453: 80.4348% ( 16) 00:14:55.591 16602.453 - 16727.284: 80.6192% ( 19) 00:14:55.591 16727.284 - 16852.114: 80.7939% ( 18) 00:14:55.591 16852.114 - 16976.945: 80.9783% ( 19) 00:14:55.591 16976.945 - 17101.775: 81.1530% ( 18) 00:14:55.591 17101.775 - 17226.606: 81.2888% ( 14) 00:14:55.591 17226.606 - 17351.436: 81.4150% ( 13) 00:14:55.591 17351.436 - 17476.267: 81.5411% ( 13) 00:14:55.591 17476.267 - 17601.097: 81.6673% ( 13) 00:14:55.591 17601.097 - 17725.928: 81.7741% ( 11) 00:14:55.591 17725.928 - 17850.758: 81.8323% ( 6) 00:14:55.591 17850.758 - 17975.589: 81.9002% ( 7) 00:14:55.591 17975.589 - 18100.419: 81.9488% ( 5) 00:14:55.591 18100.419 - 18225.250: 82.0070% ( 6) 00:14:55.591 18225.250 - 18350.080: 82.1040% ( 10) 00:14:55.591 18350.080 - 18474.910: 82.2011% ( 10) 00:14:55.591 18474.910 - 18599.741: 82.3078% ( 11) 00:14:55.591 18599.741 - 18724.571: 82.4049% ( 10) 00:14:55.591 18724.571 - 18849.402: 82.5019% ( 10) 00:14:55.591 18849.402 - 18974.232: 82.6572% ( 16) 00:14:55.591 18974.232 - 19099.063: 82.9095% ( 26) 00:14:55.591 19099.063 - 19223.893: 83.3463% ( 45) 00:14:55.591 19223.893 - 19348.724: 83.8121% ( 48) 00:14:55.591 19348.724 - 19473.554: 84.3265% ( 53) 00:14:55.591 19473.554 - 19598.385: 84.9088% ( 60) 00:14:55.591 19598.385 - 19723.215: 85.5105% ( 62) 00:14:55.591 19723.215 - 19848.046: 86.1510% ( 66) 00:14:55.591 19848.046 - 19972.876: 86.7624% ( 63) 00:14:55.591 19972.876 - 20097.707: 87.4224% ( 68) 00:14:55.591 20097.707 - 20222.537: 88.0338% ( 63) 00:14:55.591 20222.537 - 20347.368: 88.7325% ( 72) 00:14:55.591 20347.368 - 20472.198: 89.3536% ( 64) 00:14:55.591 20472.198 - 20597.029: 90.0427% ( 71) 00:14:55.591 20597.029 - 20721.859: 90.7026% ( 68) 00:14:55.591 20721.859 - 20846.690: 91.3820% ( 70) 00:14:55.591 20846.690 - 20971.520: 91.9934% ( 63) 00:14:55.591 20971.520 - 21096.350: 92.6533% ( 68) 00:14:55.591 21096.350 - 21221.181: 93.2745% ( 64) 00:14:55.591 21221.181 - 21346.011: 
93.9247% ( 67) 00:14:55.592 21346.011 - 21470.842: 94.5167% ( 61) 00:14:55.592 21470.842 - 21595.672: 95.1184% ( 62) 00:14:55.592 21595.672 - 21720.503: 95.7104% ( 61) 00:14:55.592 21720.503 - 21845.333: 96.3121% ( 62) 00:14:55.592 21845.333 - 21970.164: 96.8847% ( 59) 00:14:55.592 21970.164 - 22094.994: 97.4185% ( 55) 00:14:55.592 22094.994 - 22219.825: 97.8843% ( 48) 00:14:55.592 22219.825 - 22344.655: 98.2240% ( 35) 00:14:55.592 22344.655 - 22469.486: 98.3890% ( 17) 00:14:55.592 22469.486 - 22594.316: 98.5345% ( 15) 00:14:55.592 22594.316 - 22719.147: 98.6607% ( 13) 00:14:55.592 22719.147 - 22843.977: 98.7578% ( 10) 00:14:55.592 22843.977 - 22968.808: 98.8160% ( 6) 00:14:55.592 22968.808 - 23093.638: 98.8645% ( 5) 00:14:55.592 23093.638 - 23218.469: 98.9325% ( 7) 00:14:55.592 23218.469 - 23343.299: 98.9810% ( 5) 00:14:55.592 23343.299 - 23468.130: 99.0295% ( 5) 00:14:55.592 23468.130 - 23592.960: 99.0683% ( 4) 00:14:55.592 23592.960 - 23717.790: 99.1071% ( 4) 00:14:55.592 23717.790 - 23842.621: 99.1363% ( 3) 00:14:55.592 23842.621 - 23967.451: 99.1654% ( 3) 00:14:55.592 23967.451 - 24092.282: 99.1945% ( 3) 00:14:55.592 24092.282 - 24217.112: 99.2333% ( 4) 00:14:55.592 24217.112 - 24341.943: 99.2721% ( 4) 00:14:55.592 24341.943 - 24466.773: 99.3012% ( 3) 00:14:55.592 24466.773 - 24591.604: 99.3401% ( 4) 00:14:55.592 24591.604 - 24716.434: 99.3789% ( 4) 00:14:55.592 28960.670 - 29085.501: 99.3886% ( 1) 00:14:55.592 29085.501 - 29210.331: 99.4274% ( 4) 00:14:55.592 29210.331 - 29335.162: 99.4662% ( 4) 00:14:55.592 29335.162 - 29459.992: 99.4953% ( 3) 00:14:55.592 29459.992 - 29584.823: 99.5245% ( 3) 00:14:55.592 29584.823 - 29709.653: 99.5536% ( 3) 00:14:55.592 29709.653 - 29834.484: 99.5827% ( 3) 00:14:55.592 29834.484 - 29959.314: 99.6118% ( 3) 00:14:55.592 29959.314 - 30084.145: 99.6506% ( 4) 00:14:55.592 30084.145 - 30208.975: 99.6797% ( 3) 00:14:55.592 30208.975 - 30333.806: 99.7186% ( 4) 00:14:55.592 30333.806 - 30458.636: 99.7574% ( 4) 00:14:55.592 30458.636 - 30583.467: 99.7768% ( 2) 00:14:55.592 30583.467 - 30708.297: 99.8156% ( 4) 00:14:55.592 30708.297 - 30833.128: 99.8544% ( 4) 00:14:55.592 30833.128 - 30957.958: 99.8932% ( 4) 00:14:55.592 30957.958 - 31082.789: 99.9321% ( 4) 00:14:55.592 31082.789 - 31207.619: 99.9612% ( 3) 00:14:55.592 31207.619 - 31332.450: 99.9903% ( 3) 00:14:55.592 31332.450 - 31457.280: 100.0000% ( 1) 00:14:55.592 00:14:55.592 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:55.592 ============================================================================== 00:14:55.592 Range in us Cumulative IO count 00:14:55.592 8550.888 - 8613.303: 0.1262% ( 13) 00:14:55.592 8613.303 - 8675.718: 0.3300% ( 21) 00:14:55.592 8675.718 - 8738.133: 0.7473% ( 43) 00:14:55.592 8738.133 - 8800.549: 1.4655% ( 74) 00:14:55.592 8800.549 - 8862.964: 2.3098% ( 87) 00:14:55.592 8862.964 - 8925.379: 3.2706% ( 99) 00:14:55.592 8925.379 - 8987.794: 4.3575% ( 112) 00:14:55.592 8987.794 - 9050.210: 5.6483% ( 133) 00:14:55.592 9050.210 - 9112.625: 6.9391% ( 133) 00:14:55.592 9112.625 - 9175.040: 8.2783% ( 138) 00:14:55.592 9175.040 - 9237.455: 9.7826% ( 155) 00:14:55.592 9237.455 - 9299.870: 11.2578% ( 152) 00:14:55.592 9299.870 - 9362.286: 12.7329% ( 152) 00:14:55.592 9362.286 - 9424.701: 14.4022% ( 172) 00:14:55.592 9424.701 - 9487.116: 16.0229% ( 167) 00:14:55.592 9487.116 - 9549.531: 17.7310% ( 176) 00:14:55.592 9549.531 - 9611.947: 19.3323% ( 165) 00:14:55.592 9611.947 - 9674.362: 21.0307% ( 175) 00:14:55.592 9674.362 - 9736.777: 22.6999% ( 172) 00:14:55.592 
9736.777 - 9799.192: 24.4274% ( 178) 00:14:55.592 9799.192 - 9861.608: 26.0870% ( 171) 00:14:55.592 9861.608 - 9924.023: 27.8339% ( 180) 00:14:55.592 9924.023 - 9986.438: 29.5225% ( 174) 00:14:55.592 9986.438 - 10048.853: 31.4732% ( 201) 00:14:55.592 10048.853 - 10111.269: 33.5792% ( 217) 00:14:55.592 10111.269 - 10173.684: 35.7337% ( 222) 00:14:55.592 10173.684 - 10236.099: 37.9173% ( 225) 00:14:55.592 10236.099 - 10298.514: 40.0815% ( 223) 00:14:55.592 10298.514 - 10360.930: 42.2069% ( 219) 00:14:55.592 10360.930 - 10423.345: 44.3226% ( 218) 00:14:55.592 10423.345 - 10485.760: 46.4577% ( 220) 00:14:55.592 10485.760 - 10548.175: 48.6122% ( 222) 00:14:55.592 10548.175 - 10610.590: 50.7667% ( 222) 00:14:55.592 10610.590 - 10673.006: 53.0377% ( 234) 00:14:55.592 10673.006 - 10735.421: 55.2795% ( 231) 00:14:55.592 10735.421 - 10797.836: 57.3661% ( 215) 00:14:55.592 10797.836 - 10860.251: 59.3556% ( 205) 00:14:55.592 10860.251 - 10922.667: 61.0734% ( 177) 00:14:55.592 10922.667 - 10985.082: 62.6068% ( 158) 00:14:55.592 10985.082 - 11047.497: 63.8587% ( 129) 00:14:55.592 11047.497 - 11109.912: 65.1009% ( 128) 00:14:55.592 11109.912 - 11172.328: 66.0229% ( 95) 00:14:55.592 11172.328 - 11234.743: 66.9546% ( 96) 00:14:55.592 11234.743 - 11297.158: 67.7989% ( 87) 00:14:55.592 11297.158 - 11359.573: 68.6335% ( 86) 00:14:55.592 11359.573 - 11421.989: 69.5458% ( 94) 00:14:55.592 11421.989 - 11484.404: 70.3610% ( 84) 00:14:55.592 11484.404 - 11546.819: 71.1957% ( 86) 00:14:55.592 11546.819 - 11609.234: 71.9429% ( 77) 00:14:55.592 11609.234 - 11671.650: 72.7582% ( 84) 00:14:55.592 11671.650 - 11734.065: 73.4472% ( 71) 00:14:55.592 11734.065 - 11796.480: 74.0780% ( 65) 00:14:55.592 11796.480 - 11858.895: 74.5827% ( 52) 00:14:55.592 11858.895 - 11921.310: 74.9903% ( 42) 00:14:55.592 11921.310 - 11983.726: 75.3591% ( 38) 00:14:55.592 11983.726 - 12046.141: 75.7085% ( 36) 00:14:55.592 12046.141 - 12108.556: 76.0093% ( 31) 00:14:55.592 12108.556 - 12170.971: 76.2519% ( 25) 00:14:55.592 12170.971 - 12233.387: 76.4460% ( 20) 00:14:55.592 12233.387 - 12295.802: 76.6304% ( 19) 00:14:55.592 12295.802 - 12358.217: 76.8051% ( 18) 00:14:55.592 12358.217 - 12420.632: 76.9895% ( 19) 00:14:55.592 12420.632 - 12483.048: 77.1545% ( 17) 00:14:55.592 12483.048 - 12545.463: 77.3292% ( 18) 00:14:55.592 12545.463 - 12607.878: 77.4165% ( 9) 00:14:55.592 12607.878 - 12670.293: 77.4554% ( 4) 00:14:55.592 12670.293 - 12732.709: 77.4845% ( 3) 00:14:55.592 12732.709 - 12795.124: 77.5233% ( 4) 00:14:55.592 12795.124 - 12857.539: 77.5621% ( 4) 00:14:55.592 12857.539 - 12919.954: 77.6009% ( 4) 00:14:55.592 12919.954 - 12982.370: 77.6495% ( 5) 00:14:55.592 12982.370 - 13044.785: 77.7077% ( 6) 00:14:55.592 13044.785 - 13107.200: 77.7659% ( 6) 00:14:55.592 13107.200 - 13169.615: 77.8144% ( 5) 00:14:55.592 13169.615 - 13232.030: 77.8824% ( 7) 00:14:55.592 13232.030 - 13294.446: 77.9212% ( 4) 00:14:55.592 13294.446 - 13356.861: 77.9794% ( 6) 00:14:55.592 13356.861 - 13419.276: 78.0474% ( 7) 00:14:55.592 13419.276 - 13481.691: 78.0765% ( 3) 00:14:55.592 13481.691 - 13544.107: 78.1250% ( 5) 00:14:55.592 13544.107 - 13606.522: 78.1638% ( 4) 00:14:55.592 13606.522 - 13668.937: 78.1929% ( 3) 00:14:55.592 13668.937 - 13731.352: 78.2318% ( 4) 00:14:55.592 13731.352 - 13793.768: 78.2706% ( 4) 00:14:55.592 13793.768 - 13856.183: 78.2900% ( 2) 00:14:55.592 13856.183 - 13918.598: 78.3288% ( 4) 00:14:55.592 13918.598 - 13981.013: 78.3773% ( 5) 00:14:55.592 13981.013 - 14043.429: 78.4161% ( 4) 00:14:55.592 14043.429 - 14105.844: 78.4550% ( 4) 
00:14:55.592 14105.844 - 14168.259: 78.5035% ( 5) 00:14:55.592 14168.259 - 14230.674: 78.5617% ( 6) 00:14:55.592 14230.674 - 14293.090: 78.6200% ( 6) 00:14:55.592 14293.090 - 14355.505: 78.6782% ( 6) 00:14:55.592 14355.505 - 14417.920: 78.7267% ( 5) 00:14:55.592 14417.920 - 14480.335: 78.7752% ( 5) 00:14:55.592 14480.335 - 14542.750: 78.8238% ( 5) 00:14:55.592 14542.750 - 14605.166: 78.8820% ( 6) 00:14:55.592 14605.166 - 14667.581: 78.9014% ( 2) 00:14:55.592 14667.581 - 14729.996: 78.9499% ( 5) 00:14:55.592 14729.996 - 14792.411: 79.0082% ( 6) 00:14:55.592 14792.411 - 14854.827: 79.0567% ( 5) 00:14:55.592 14854.827 - 14917.242: 79.1149% ( 6) 00:14:55.592 14917.242 - 14979.657: 79.1731% ( 6) 00:14:55.592 14979.657 - 15042.072: 79.2120% ( 4) 00:14:55.592 15042.072 - 15104.488: 79.2508% ( 4) 00:14:55.592 15104.488 - 15166.903: 79.2799% ( 3) 00:14:55.592 15166.903 - 15229.318: 79.3187% ( 4) 00:14:55.592 15229.318 - 15291.733: 79.3478% ( 3) 00:14:55.592 15291.733 - 15354.149: 79.3769% ( 3) 00:14:55.592 15354.149 - 15416.564: 79.4061% ( 3) 00:14:55.592 15416.564 - 15478.979: 79.4449% ( 4) 00:14:55.592 15478.979 - 15541.394: 79.4837% ( 4) 00:14:55.592 15541.394 - 15603.810: 79.5128% ( 3) 00:14:55.592 15603.810 - 15666.225: 79.5516% ( 4) 00:14:55.592 15666.225 - 15728.640: 79.5905% ( 4) 00:14:55.593 15728.640 - 15791.055: 79.6196% ( 3) 00:14:55.593 15791.055 - 15853.470: 79.6584% ( 4) 00:14:55.593 15853.470 - 15915.886: 79.6875% ( 3) 00:14:55.593 15915.886 - 15978.301: 79.7457% ( 6) 00:14:55.593 15978.301 - 16103.131: 79.8428% ( 10) 00:14:55.593 16103.131 - 16227.962: 79.9398% ( 10) 00:14:55.593 16227.962 - 16352.792: 80.0951% ( 16) 00:14:55.593 16352.792 - 16477.623: 80.2407% ( 15) 00:14:55.593 16477.623 - 16602.453: 80.3571% ( 12) 00:14:55.593 16602.453 - 16727.284: 80.4736% ( 12) 00:14:55.593 16727.284 - 16852.114: 80.5707% ( 10) 00:14:55.593 16852.114 - 16976.945: 80.6386% ( 7) 00:14:55.593 16976.945 - 17101.775: 80.7162% ( 8) 00:14:55.593 17101.775 - 17226.606: 80.7939% ( 8) 00:14:55.593 17226.606 - 17351.436: 80.8909% ( 10) 00:14:55.593 17351.436 - 17476.267: 80.9977% ( 11) 00:14:55.593 17476.267 - 17601.097: 81.1044% ( 11) 00:14:55.593 17601.097 - 17725.928: 81.2403% ( 14) 00:14:55.593 17725.928 - 17850.758: 81.3665% ( 13) 00:14:55.593 17850.758 - 17975.589: 81.4829% ( 12) 00:14:55.593 17975.589 - 18100.419: 81.5994% ( 12) 00:14:55.593 18100.419 - 18225.250: 81.7255% ( 13) 00:14:55.593 18225.250 - 18350.080: 81.8420% ( 12) 00:14:55.593 18350.080 - 18474.910: 81.9682% ( 13) 00:14:55.593 18474.910 - 18599.741: 82.0652% ( 10) 00:14:55.593 18599.741 - 18724.571: 82.1720% ( 11) 00:14:55.593 18724.571 - 18849.402: 82.3273% ( 16) 00:14:55.593 18849.402 - 18974.232: 82.5214% ( 20) 00:14:55.593 18974.232 - 19099.063: 82.8513% ( 34) 00:14:55.593 19099.063 - 19223.893: 83.3948% ( 56) 00:14:55.593 19223.893 - 19348.724: 83.8995% ( 52) 00:14:55.593 19348.724 - 19473.554: 84.4526% ( 57) 00:14:55.593 19473.554 - 19598.385: 85.1320% ( 70) 00:14:55.593 19598.385 - 19723.215: 85.7434% ( 63) 00:14:55.593 19723.215 - 19848.046: 86.3645% ( 64) 00:14:55.593 19848.046 - 19972.876: 87.0050% ( 66) 00:14:55.593 19972.876 - 20097.707: 87.6844% ( 70) 00:14:55.593 20097.707 - 20222.537: 88.3249% ( 66) 00:14:55.593 20222.537 - 20347.368: 89.0334% ( 73) 00:14:55.593 20347.368 - 20472.198: 89.7224% ( 71) 00:14:55.593 20472.198 - 20597.029: 90.3921% ( 69) 00:14:55.593 20597.029 - 20721.859: 91.1005% ( 73) 00:14:55.593 20721.859 - 20846.690: 91.7896% ( 71) 00:14:55.593 20846.690 - 20971.520: 92.4689% ( 70) 00:14:55.593 
20971.520 - 21096.350: 93.0901% ( 64) 00:14:55.593 21096.350 - 21221.181: 93.7500% ( 68) 00:14:55.593 21221.181 - 21346.011: 94.4099% ( 68) 00:14:55.593 21346.011 - 21470.842: 95.0505% ( 66) 00:14:55.593 21470.842 - 21595.672: 95.6328% ( 60) 00:14:55.593 21595.672 - 21720.503: 96.2345% ( 62) 00:14:55.593 21720.503 - 21845.333: 96.8071% ( 59) 00:14:55.593 21845.333 - 21970.164: 97.3602% ( 57) 00:14:55.593 21970.164 - 22094.994: 97.9037% ( 56) 00:14:55.593 22094.994 - 22219.825: 98.4278% ( 54) 00:14:55.593 22219.825 - 22344.655: 98.7772% ( 36) 00:14:55.593 22344.655 - 22469.486: 98.9616% ( 19) 00:14:55.593 22469.486 - 22594.316: 99.1168% ( 16) 00:14:55.593 22594.316 - 22719.147: 99.2236% ( 11) 00:14:55.593 22719.147 - 22843.977: 99.2721% ( 5) 00:14:55.593 22843.977 - 22968.808: 99.2915% ( 2) 00:14:55.593 22968.808 - 23093.638: 99.3109% ( 2) 00:14:55.593 23093.638 - 23218.469: 99.3401% ( 3) 00:14:55.593 23218.469 - 23343.299: 99.3595% ( 2) 00:14:55.593 23343.299 - 23468.130: 99.3789% ( 2) 00:14:55.593 26464.061 - 26588.891: 99.3886% ( 1) 00:14:55.593 26588.891 - 26713.722: 99.4080% ( 2) 00:14:55.593 26713.722 - 26838.552: 99.4371% ( 3) 00:14:55.593 26838.552 - 26963.383: 99.4759% ( 4) 00:14:55.593 26963.383 - 27088.213: 99.5050% ( 3) 00:14:55.593 27088.213 - 27213.044: 99.5342% ( 3) 00:14:55.593 27213.044 - 27337.874: 99.5730% ( 4) 00:14:55.593 27337.874 - 27462.705: 99.6021% ( 3) 00:14:55.593 27462.705 - 27587.535: 99.6409% ( 4) 00:14:55.593 27587.535 - 27712.366: 99.6603% ( 2) 00:14:55.593 27712.366 - 27837.196: 99.6991% ( 4) 00:14:55.593 27837.196 - 27962.027: 99.7380% ( 4) 00:14:55.593 27962.027 - 28086.857: 99.7671% ( 3) 00:14:55.593 28086.857 - 28211.688: 99.8059% ( 4) 00:14:55.593 28211.688 - 28336.518: 99.8350% ( 3) 00:14:55.593 28336.518 - 28461.349: 99.8738% ( 4) 00:14:55.593 28461.349 - 28586.179: 99.9030% ( 3) 00:14:55.593 28586.179 - 28711.010: 99.9418% ( 4) 00:14:55.593 28711.010 - 28835.840: 99.9709% ( 3) 00:14:55.593 28835.840 - 28960.670: 100.0000% ( 3) 00:14:55.593 00:14:55.593 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:55.593 ============================================================================== 00:14:55.593 Range in us Cumulative IO count 00:14:55.593 8488.472 - 8550.888: 0.0291% ( 3) 00:14:55.593 8550.888 - 8613.303: 0.1165% ( 9) 00:14:55.593 8613.303 - 8675.718: 0.2911% ( 18) 00:14:55.593 8675.718 - 8738.133: 0.8346% ( 56) 00:14:55.593 8738.133 - 8800.549: 1.5625% ( 75) 00:14:55.593 8800.549 - 8862.964: 2.5524% ( 102) 00:14:55.593 8862.964 - 8925.379: 3.4647% ( 94) 00:14:55.593 8925.379 - 8987.794: 4.5613% ( 113) 00:14:55.593 8987.794 - 9050.210: 5.7356% ( 121) 00:14:55.593 9050.210 - 9112.625: 7.0749% ( 138) 00:14:55.593 9112.625 - 9175.040: 8.4530% ( 142) 00:14:55.593 9175.040 - 9237.455: 9.8894% ( 148) 00:14:55.593 9237.455 - 9299.870: 11.4227% ( 158) 00:14:55.593 9299.870 - 9362.286: 12.8591% ( 148) 00:14:55.593 9362.286 - 9424.701: 14.4119% ( 160) 00:14:55.593 9424.701 - 9487.116: 16.0229% ( 166) 00:14:55.593 9487.116 - 9549.531: 17.5563% ( 158) 00:14:55.593 9549.531 - 9611.947: 19.2158% ( 171) 00:14:55.593 9611.947 - 9674.362: 20.8172% ( 165) 00:14:55.593 9674.362 - 9736.777: 22.4476% ( 168) 00:14:55.593 9736.777 - 9799.192: 24.2818% ( 189) 00:14:55.593 9799.192 - 9861.608: 26.0190% ( 179) 00:14:55.593 9861.608 - 9924.023: 27.7950% ( 183) 00:14:55.593 9924.023 - 9986.438: 29.5322% ( 179) 00:14:55.593 9986.438 - 10048.853: 31.5994% ( 213) 00:14:55.593 10048.853 - 10111.269: 33.6374% ( 210) 00:14:55.593 10111.269 - 10173.684: 35.7919% 
( 222) 00:14:55.593 10173.684 - 10236.099: 37.9367% ( 221) 00:14:55.593 10236.099 - 10298.514: 40.2077% ( 234) 00:14:55.593 10298.514 - 10360.930: 42.4010% ( 226) 00:14:55.593 10360.930 - 10423.345: 44.4293% ( 209) 00:14:55.593 10423.345 - 10485.760: 46.4577% ( 209) 00:14:55.593 10485.760 - 10548.175: 48.6510% ( 226) 00:14:55.593 10548.175 - 10610.590: 50.6988% ( 211) 00:14:55.593 10610.590 - 10673.006: 52.9891% ( 236) 00:14:55.593 10673.006 - 10735.421: 55.1242% ( 220) 00:14:55.593 10735.421 - 10797.836: 57.2690% ( 221) 00:14:55.593 10797.836 - 10860.251: 59.2682% ( 206) 00:14:55.593 10860.251 - 10922.667: 60.9957% ( 178) 00:14:55.593 10922.667 - 10985.082: 62.4127% ( 146) 00:14:55.593 10985.082 - 11047.497: 63.7908% ( 142) 00:14:55.593 11047.497 - 11109.912: 65.0136% ( 126) 00:14:55.593 11109.912 - 11172.328: 65.9938% ( 101) 00:14:55.593 11172.328 - 11234.743: 66.9546% ( 99) 00:14:55.593 11234.743 - 11297.158: 67.7601% ( 83) 00:14:55.593 11297.158 - 11359.573: 68.6335% ( 90) 00:14:55.593 11359.573 - 11421.989: 69.3711% ( 76) 00:14:55.593 11421.989 - 11484.404: 70.2155% ( 87) 00:14:55.593 11484.404 - 11546.819: 71.0598% ( 87) 00:14:55.593 11546.819 - 11609.234: 71.9235% ( 89) 00:14:55.593 11609.234 - 11671.650: 72.6320% ( 73) 00:14:55.593 11671.650 - 11734.065: 73.3405% ( 73) 00:14:55.593 11734.065 - 11796.480: 73.9325% ( 61) 00:14:55.593 11796.480 - 11858.895: 74.4274% ( 51) 00:14:55.593 11858.895 - 11921.310: 74.7962% ( 38) 00:14:55.593 11921.310 - 11983.726: 75.1844% ( 40) 00:14:55.593 11983.726 - 12046.141: 75.5435% ( 37) 00:14:55.593 12046.141 - 12108.556: 75.8637% ( 33) 00:14:55.593 12108.556 - 12170.971: 76.1355% ( 28) 00:14:55.593 12170.971 - 12233.387: 76.3102% ( 18) 00:14:55.593 12233.387 - 12295.802: 76.4849% ( 18) 00:14:55.593 12295.802 - 12358.217: 76.6595% ( 18) 00:14:55.593 12358.217 - 12420.632: 76.8148% ( 16) 00:14:55.593 12420.632 - 12483.048: 76.9701% ( 16) 00:14:55.593 12483.048 - 12545.463: 77.1254% ( 16) 00:14:55.593 12545.463 - 12607.878: 77.2321% ( 11) 00:14:55.593 12607.878 - 12670.293: 77.2613% ( 3) 00:14:55.593 12670.293 - 12732.709: 77.2904% ( 3) 00:14:55.593 12732.709 - 12795.124: 77.3001% ( 1) 00:14:55.593 12795.124 - 12857.539: 77.3195% ( 2) 00:14:55.593 12857.539 - 12919.954: 77.3389% ( 2) 00:14:55.593 12919.954 - 12982.370: 77.3583% ( 2) 00:14:55.594 12982.370 - 13044.785: 77.3777% ( 2) 00:14:55.594 13044.785 - 13107.200: 77.3874% ( 1) 00:14:55.594 13107.200 - 13169.615: 77.4068% ( 2) 00:14:55.594 13169.615 - 13232.030: 77.4262% ( 2) 00:14:55.594 13232.030 - 13294.446: 77.4457% ( 2) 00:14:55.594 13294.446 - 13356.861: 77.4651% ( 2) 00:14:55.594 13356.861 - 13419.276: 77.4845% ( 2) 00:14:55.594 13419.276 - 13481.691: 77.5233% ( 4) 00:14:55.594 13481.691 - 13544.107: 77.5815% ( 6) 00:14:55.594 13544.107 - 13606.522: 77.6592% ( 8) 00:14:55.594 13606.522 - 13668.937: 77.7368% ( 8) 00:14:55.594 13668.937 - 13731.352: 77.8047% ( 7) 00:14:55.594 13731.352 - 13793.768: 77.8727% ( 7) 00:14:55.594 13793.768 - 13856.183: 77.9600% ( 9) 00:14:55.594 13856.183 - 13918.598: 78.0571% ( 10) 00:14:55.594 13918.598 - 13981.013: 78.1444% ( 9) 00:14:55.594 13981.013 - 14043.429: 78.2220% ( 8) 00:14:55.594 14043.429 - 14105.844: 78.2997% ( 8) 00:14:55.594 14105.844 - 14168.259: 78.3676% ( 7) 00:14:55.594 14168.259 - 14230.674: 78.4259% ( 6) 00:14:55.594 14230.674 - 14293.090: 78.5035% ( 8) 00:14:55.594 14293.090 - 14355.505: 78.5714% ( 7) 00:14:55.594 14355.505 - 14417.920: 78.6394% ( 7) 00:14:55.594 14417.920 - 14480.335: 78.7170% ( 8) 00:14:55.594 14480.335 - 14542.750: 
78.7849% ( 7) 00:14:55.594 14542.750 - 14605.166: 78.8626% ( 8) 00:14:55.594 14605.166 - 14667.581: 78.9305% ( 7) 00:14:55.594 14667.581 - 14729.996: 78.9984% ( 7) 00:14:55.594 14729.996 - 14792.411: 79.0567% ( 6) 00:14:55.594 14792.411 - 14854.827: 79.1246% ( 7) 00:14:55.594 14854.827 - 14917.242: 79.2023% ( 8) 00:14:55.594 14917.242 - 14979.657: 79.2799% ( 8) 00:14:55.594 14979.657 - 15042.072: 79.3478% ( 7) 00:14:55.594 15042.072 - 15104.488: 79.4255% ( 8) 00:14:55.594 15104.488 - 15166.903: 79.5031% ( 8) 00:14:55.594 15166.903 - 15229.318: 79.5710% ( 7) 00:14:55.594 15229.318 - 15291.733: 79.6487% ( 8) 00:14:55.594 15291.733 - 15354.149: 79.7166% ( 7) 00:14:55.594 15354.149 - 15416.564: 79.8040% ( 9) 00:14:55.594 15416.564 - 15478.979: 79.8622% ( 6) 00:14:55.594 15478.979 - 15541.394: 79.9204% ( 6) 00:14:55.594 15541.394 - 15603.810: 79.9786% ( 6) 00:14:55.594 15603.810 - 15666.225: 80.0078% ( 3) 00:14:55.594 15666.225 - 15728.640: 80.0369% ( 3) 00:14:55.594 15728.640 - 15791.055: 80.0757% ( 4) 00:14:55.594 15791.055 - 15853.470: 80.1048% ( 3) 00:14:55.594 15853.470 - 15915.886: 80.1242% ( 2) 00:14:55.594 16602.453 - 16727.284: 80.1339% ( 1) 00:14:55.594 16727.284 - 16852.114: 80.1727% ( 4) 00:14:55.594 16852.114 - 16976.945: 80.2310% ( 6) 00:14:55.594 16976.945 - 17101.775: 80.3086% ( 8) 00:14:55.594 17101.775 - 17226.606: 80.3766% ( 7) 00:14:55.594 17226.606 - 17351.436: 80.4542% ( 8) 00:14:55.594 17351.436 - 17476.267: 80.5415% ( 9) 00:14:55.594 17476.267 - 17601.097: 80.6386% ( 10) 00:14:55.594 17601.097 - 17725.928: 80.7356% ( 10) 00:14:55.594 17725.928 - 17850.758: 80.8618% ( 13) 00:14:55.594 17850.758 - 17975.589: 80.9783% ( 12) 00:14:55.594 17975.589 - 18100.419: 81.1335% ( 16) 00:14:55.594 18100.419 - 18225.250: 81.3470% ( 22) 00:14:55.594 18225.250 - 18350.080: 81.5509% ( 21) 00:14:55.594 18350.080 - 18474.910: 81.7935% ( 25) 00:14:55.594 18474.910 - 18599.741: 82.0361% ( 25) 00:14:55.594 18599.741 - 18724.571: 82.2787% ( 25) 00:14:55.594 18724.571 - 18849.402: 82.5796% ( 31) 00:14:55.594 18849.402 - 18974.232: 82.8513% ( 28) 00:14:55.594 18974.232 - 19099.063: 83.1813% ( 34) 00:14:55.594 19099.063 - 19223.893: 83.7539% ( 59) 00:14:55.594 19223.893 - 19348.724: 84.3168% ( 58) 00:14:55.594 19348.724 - 19473.554: 84.9282% ( 63) 00:14:55.594 19473.554 - 19598.385: 85.6269% ( 72) 00:14:55.594 19598.385 - 19723.215: 86.4422% ( 84) 00:14:55.594 19723.215 - 19848.046: 87.1506% ( 73) 00:14:55.594 19848.046 - 19972.876: 87.8688% ( 74) 00:14:55.594 19972.876 - 20097.707: 88.5675% ( 72) 00:14:55.594 20097.707 - 20222.537: 89.2275% ( 68) 00:14:55.594 20222.537 - 20347.368: 89.8777% ( 67) 00:14:55.594 20347.368 - 20472.198: 90.5182% ( 66) 00:14:55.594 20472.198 - 20597.029: 91.1879% ( 69) 00:14:55.594 20597.029 - 20721.859: 91.8672% ( 70) 00:14:55.594 20721.859 - 20846.690: 92.4592% ( 61) 00:14:55.594 20846.690 - 20971.520: 93.0804% ( 64) 00:14:55.594 20971.520 - 21096.350: 93.6141% ( 55) 00:14:55.594 21096.350 - 21221.181: 94.1964% ( 60) 00:14:55.594 21221.181 - 21346.011: 94.7108% ( 53) 00:14:55.594 21346.011 - 21470.842: 95.2640% ( 57) 00:14:55.594 21470.842 - 21595.672: 95.8172% ( 57) 00:14:55.594 21595.672 - 21720.503: 96.3898% ( 59) 00:14:55.594 21720.503 - 21845.333: 96.9138% ( 54) 00:14:55.594 21845.333 - 21970.164: 97.4379% ( 54) 00:14:55.594 21970.164 - 22094.994: 97.9523% ( 53) 00:14:55.594 22094.994 - 22219.825: 98.4278% ( 49) 00:14:55.594 22219.825 - 22344.655: 98.7966% ( 38) 00:14:55.594 22344.655 - 22469.486: 98.9616% ( 17) 00:14:55.594 22469.486 - 22594.316: 99.0974% ( 14) 
00:14:55.594 22594.316 - 22719.147: 99.2042% ( 11) 00:14:55.594 22719.147 - 22843.977: 99.2624% ( 6) 00:14:55.594 22843.977 - 22968.808: 99.2818% ( 2) 00:14:55.594 22968.808 - 23093.638: 99.3109% ( 3) 00:14:55.594 23093.638 - 23218.469: 99.3304% ( 2) 00:14:55.594 23218.469 - 23343.299: 99.3595% ( 3) 00:14:55.594 23343.299 - 23468.130: 99.3789% ( 2) 00:14:55.594 23967.451 - 24092.282: 99.3886% ( 1) 00:14:55.594 24092.282 - 24217.112: 99.4177% ( 3) 00:14:55.594 24217.112 - 24341.943: 99.4468% ( 3) 00:14:55.594 24341.943 - 24466.773: 99.4856% ( 4) 00:14:55.594 24466.773 - 24591.604: 99.5148% ( 3) 00:14:55.594 24591.604 - 24716.434: 99.5439% ( 3) 00:14:55.594 24716.434 - 24841.265: 99.5730% ( 3) 00:14:55.594 24841.265 - 24966.095: 99.6118% ( 4) 00:14:55.594 24966.095 - 25090.926: 99.6409% ( 3) 00:14:55.594 25090.926 - 25215.756: 99.6700% ( 3) 00:14:55.594 25215.756 - 25340.587: 99.6991% ( 3) 00:14:55.594 25340.587 - 25465.417: 99.7283% ( 3) 00:14:55.594 25465.417 - 25590.248: 99.7671% ( 4) 00:14:55.594 25590.248 - 25715.078: 99.8059% ( 4) 00:14:55.594 25715.078 - 25839.909: 99.8350% ( 3) 00:14:55.594 25839.909 - 25964.739: 99.8738% ( 4) 00:14:55.594 25964.739 - 26089.570: 99.9030% ( 3) 00:14:55.594 26089.570 - 26214.400: 99.9418% ( 4) 00:14:55.594 26214.400 - 26339.230: 99.9709% ( 3) 00:14:55.594 26339.230 - 26464.061: 100.0000% ( 3) 00:14:55.594 00:14:55.594 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:55.594 ============================================================================== 00:14:55.594 Range in us Cumulative IO count 00:14:55.594 8488.472 - 8550.888: 0.0194% ( 2) 00:14:55.594 8550.888 - 8613.303: 0.1165% ( 10) 00:14:55.594 8613.303 - 8675.718: 0.3882% ( 28) 00:14:55.594 8675.718 - 8738.133: 0.9123% ( 54) 00:14:55.594 8738.133 - 8800.549: 1.6790% ( 79) 00:14:55.594 8800.549 - 8862.964: 2.5136% ( 86) 00:14:55.594 8862.964 - 8925.379: 3.4453% ( 96) 00:14:55.594 8925.379 - 8987.794: 4.5128% ( 110) 00:14:55.594 8987.794 - 9050.210: 5.7356% ( 126) 00:14:55.594 9050.210 - 9112.625: 6.9876% ( 129) 00:14:55.594 9112.625 - 9175.040: 8.2880% ( 134) 00:14:55.594 9175.040 - 9237.455: 9.7535% ( 151) 00:14:55.594 9237.455 - 9299.870: 11.1898% ( 148) 00:14:55.594 9299.870 - 9362.286: 12.7038% ( 156) 00:14:55.594 9362.286 - 9424.701: 14.1693% ( 151) 00:14:55.594 9424.701 - 9487.116: 15.7706% ( 165) 00:14:55.594 9487.116 - 9549.531: 17.4689% ( 175) 00:14:55.594 9549.531 - 9611.947: 19.2352% ( 182) 00:14:55.594 9611.947 - 9674.362: 20.8075% ( 162) 00:14:55.594 9674.362 - 9736.777: 22.4864% ( 173) 00:14:55.594 9736.777 - 9799.192: 24.2139% ( 178) 00:14:55.594 9799.192 - 9861.608: 25.9220% ( 176) 00:14:55.594 9861.608 - 9924.023: 27.7271% ( 186) 00:14:55.594 9924.023 - 9986.438: 29.4934% ( 182) 00:14:55.594 9986.438 - 10048.853: 31.4635% ( 203) 00:14:55.594 10048.853 - 10111.269: 33.4142% ( 201) 00:14:55.594 10111.269 - 10173.684: 35.6269% ( 228) 00:14:55.594 10173.684 - 10236.099: 37.7523% ( 219) 00:14:55.594 10236.099 - 10298.514: 39.8874% ( 220) 00:14:55.594 10298.514 - 10360.930: 41.9546% ( 213) 00:14:55.594 10360.930 - 10423.345: 44.0606% ( 217) 00:14:55.594 10423.345 - 10485.760: 46.1665% ( 217) 00:14:55.594 10485.760 - 10548.175: 48.3210% ( 222) 00:14:55.594 10548.175 - 10610.590: 50.5047% ( 225) 00:14:55.594 10610.590 - 10673.006: 52.6398% ( 220) 00:14:55.594 10673.006 - 10735.421: 54.9204% ( 235) 00:14:55.594 10735.421 - 10797.836: 57.1137% ( 226) 00:14:55.594 10797.836 - 10860.251: 59.1033% ( 205) 00:14:55.594 10860.251 - 10922.667: 60.7337% ( 168) 00:14:55.594 
10922.667 - 10985.082: 62.2380% ( 155) 00:14:55.594 10985.082 - 11047.497: 63.6258% ( 143) 00:14:55.594 11047.497 - 11109.912: 64.7418% ( 115) 00:14:55.594 11109.912 - 11172.328: 65.6735% ( 96) 00:14:55.594 11172.328 - 11234.743: 66.6052% ( 96) 00:14:55.594 11234.743 - 11297.158: 67.5175% ( 94) 00:14:55.595 11297.158 - 11359.573: 68.4103% ( 92) 00:14:55.595 11359.573 - 11421.989: 69.2547% ( 87) 00:14:55.595 11421.989 - 11484.404: 70.1572% ( 93) 00:14:55.595 11484.404 - 11546.819: 70.9530% ( 82) 00:14:55.595 11546.819 - 11609.234: 71.7974% ( 87) 00:14:55.595 11609.234 - 11671.650: 72.5932% ( 82) 00:14:55.595 11671.650 - 11734.065: 73.2822% ( 71) 00:14:55.595 11734.065 - 11796.480: 73.9033% ( 64) 00:14:55.595 11796.480 - 11858.895: 74.4371% ( 55) 00:14:55.595 11858.895 - 11921.310: 74.8350% ( 41) 00:14:55.595 11921.310 - 11983.726: 75.2038% ( 38) 00:14:55.595 11983.726 - 12046.141: 75.5241% ( 33) 00:14:55.595 12046.141 - 12108.556: 75.8540% ( 34) 00:14:55.595 12108.556 - 12170.971: 76.0870% ( 24) 00:14:55.595 12170.971 - 12233.387: 76.2714% ( 19) 00:14:55.595 12233.387 - 12295.802: 76.4072% ( 14) 00:14:55.595 12295.802 - 12358.217: 76.5528% ( 15) 00:14:55.595 12358.217 - 12420.632: 76.6595% ( 11) 00:14:55.595 12420.632 - 12483.048: 76.8051% ( 15) 00:14:55.595 12483.048 - 12545.463: 76.9119% ( 11) 00:14:55.595 12545.463 - 12607.878: 77.0186% ( 11) 00:14:55.595 12607.878 - 12670.293: 77.0672% ( 5) 00:14:55.595 12670.293 - 12732.709: 77.1157% ( 5) 00:14:55.595 12732.709 - 12795.124: 77.1448% ( 3) 00:14:55.595 12795.124 - 12857.539: 77.1739% ( 3) 00:14:55.595 12857.539 - 12919.954: 77.2224% ( 5) 00:14:55.595 12919.954 - 12982.370: 77.2904% ( 7) 00:14:55.595 12982.370 - 13044.785: 77.3486% ( 6) 00:14:55.595 13044.785 - 13107.200: 77.3971% ( 5) 00:14:55.595 13107.200 - 13169.615: 77.4457% ( 5) 00:14:55.595 13169.615 - 13232.030: 77.4942% ( 5) 00:14:55.595 13232.030 - 13294.446: 77.5621% ( 7) 00:14:55.595 13294.446 - 13356.861: 77.6009% ( 4) 00:14:55.595 13356.861 - 13419.276: 77.6592% ( 6) 00:14:55.595 13419.276 - 13481.691: 77.6980% ( 4) 00:14:55.595 13481.691 - 13544.107: 77.7465% ( 5) 00:14:55.595 13544.107 - 13606.522: 77.7950% ( 5) 00:14:55.595 13606.522 - 13668.937: 77.8533% ( 6) 00:14:55.595 13668.937 - 13731.352: 77.9018% ( 5) 00:14:55.595 13731.352 - 13793.768: 77.9503% ( 5) 00:14:55.595 13793.768 - 13856.183: 77.9891% ( 4) 00:14:55.595 13856.183 - 13918.598: 78.0474% ( 6) 00:14:55.595 13918.598 - 13981.013: 78.0959% ( 5) 00:14:55.595 13981.013 - 14043.429: 78.1444% ( 5) 00:14:55.595 14043.429 - 14105.844: 78.1929% ( 5) 00:14:55.595 14105.844 - 14168.259: 78.2415% ( 5) 00:14:55.595 14168.259 - 14230.674: 78.3191% ( 8) 00:14:55.595 14230.674 - 14293.090: 78.4064% ( 9) 00:14:55.595 14293.090 - 14355.505: 78.4744% ( 7) 00:14:55.595 14355.505 - 14417.920: 78.5811% ( 11) 00:14:55.595 14417.920 - 14480.335: 78.6685% ( 9) 00:14:55.595 14480.335 - 14542.750: 78.7655% ( 10) 00:14:55.595 14542.750 - 14605.166: 78.8432% ( 8) 00:14:55.595 14605.166 - 14667.581: 78.9208% ( 8) 00:14:55.595 14667.581 - 14729.996: 78.9887% ( 7) 00:14:55.595 14729.996 - 14792.411: 79.0470% ( 6) 00:14:55.595 14792.411 - 14854.827: 79.1052% ( 6) 00:14:55.595 14854.827 - 14917.242: 79.1925% ( 9) 00:14:55.595 14917.242 - 14979.657: 79.2605% ( 7) 00:14:55.595 14979.657 - 15042.072: 79.3284% ( 7) 00:14:55.595 15042.072 - 15104.488: 79.3866% ( 6) 00:14:55.595 15104.488 - 15166.903: 79.4643% ( 8) 00:14:55.595 15166.903 - 15229.318: 79.5419% ( 8) 00:14:55.595 15229.318 - 15291.733: 79.6099% ( 7) 00:14:55.595 15291.733 - 15354.149: 
79.6681% ( 6) 00:14:55.595 15354.149 - 15416.564: 79.7166% ( 5) 00:14:55.595 15416.564 - 15478.979: 79.7748% ( 6) 00:14:55.595 15478.979 - 15541.394: 79.8234% ( 5) 00:14:55.595 15541.394 - 15603.810: 79.8816% ( 6) 00:14:55.595 15603.810 - 15666.225: 79.9301% ( 5) 00:14:55.595 15666.225 - 15728.640: 79.9981% ( 7) 00:14:55.595 15728.640 - 15791.055: 80.0369% ( 4) 00:14:55.595 15791.055 - 15853.470: 80.0951% ( 6) 00:14:55.595 15853.470 - 15915.886: 80.1436% ( 5) 00:14:55.595 15915.886 - 15978.301: 80.1922% ( 5) 00:14:55.595 15978.301 - 16103.131: 80.3086% ( 12) 00:14:55.595 16103.131 - 16227.962: 80.4057% ( 10) 00:14:55.595 16227.962 - 16352.792: 80.5027% ( 10) 00:14:55.595 16352.792 - 16477.623: 80.5318% ( 3) 00:14:55.595 16477.623 - 16602.453: 80.5707% ( 4) 00:14:55.595 16602.453 - 16727.284: 80.5998% ( 3) 00:14:55.595 16727.284 - 16852.114: 80.6386% ( 4) 00:14:55.595 16852.114 - 16976.945: 80.6774% ( 4) 00:14:55.595 16976.945 - 17101.775: 80.7065% ( 3) 00:14:55.595 17101.775 - 17226.606: 80.7356% ( 3) 00:14:55.595 17226.606 - 17351.436: 80.7453% ( 1) 00:14:55.595 17476.267 - 17601.097: 80.8133% ( 7) 00:14:55.595 17601.097 - 17725.928: 80.9394% ( 13) 00:14:55.595 17725.928 - 17850.758: 81.0365% ( 10) 00:14:55.595 17850.758 - 17975.589: 81.2112% ( 18) 00:14:55.595 17975.589 - 18100.419: 81.4150% ( 21) 00:14:55.595 18100.419 - 18225.250: 81.5897% ( 18) 00:14:55.595 18225.250 - 18350.080: 81.8032% ( 22) 00:14:55.595 18350.080 - 18474.910: 82.0264% ( 23) 00:14:55.595 18474.910 - 18599.741: 82.2205% ( 20) 00:14:55.595 18599.741 - 18724.571: 82.4922% ( 28) 00:14:55.595 18724.571 - 18849.402: 82.7252% ( 24) 00:14:55.595 18849.402 - 18974.232: 83.0066% ( 29) 00:14:55.595 18974.232 - 19099.063: 83.4045% ( 41) 00:14:55.595 19099.063 - 19223.893: 83.9771% ( 59) 00:14:55.595 19223.893 - 19348.724: 84.5497% ( 59) 00:14:55.595 19348.724 - 19473.554: 85.2679% ( 74) 00:14:55.595 19473.554 - 19598.385: 86.0151% ( 77) 00:14:55.595 19598.385 - 19723.215: 86.7042% ( 71) 00:14:55.595 19723.215 - 19848.046: 87.3932% ( 71) 00:14:55.595 19848.046 - 19972.876: 88.0338% ( 66) 00:14:55.595 19972.876 - 20097.707: 88.6840% ( 67) 00:14:55.595 20097.707 - 20222.537: 89.3148% ( 65) 00:14:55.595 20222.537 - 20347.368: 89.9359% ( 64) 00:14:55.595 20347.368 - 20472.198: 90.5765% ( 66) 00:14:55.595 20472.198 - 20597.029: 91.1782% ( 62) 00:14:55.595 20597.029 - 20721.859: 91.8478% ( 69) 00:14:55.595 20721.859 - 20846.690: 92.4495% ( 62) 00:14:55.595 20846.690 - 20971.520: 93.0512% ( 62) 00:14:55.595 20971.520 - 21096.350: 93.6141% ( 58) 00:14:55.595 21096.350 - 21221.181: 94.2061% ( 61) 00:14:55.595 21221.181 - 21346.011: 94.7981% ( 61) 00:14:55.595 21346.011 - 21470.842: 95.3125% ( 53) 00:14:55.595 21470.842 - 21595.672: 95.8948% ( 60) 00:14:55.595 21595.672 - 21720.503: 96.4480% ( 57) 00:14:55.595 21720.503 - 21845.333: 96.9720% ( 54) 00:14:55.595 21845.333 - 21970.164: 97.5738% ( 62) 00:14:55.595 21970.164 - 22094.994: 98.1075% ( 55) 00:14:55.595 22094.994 - 22219.825: 98.6122% ( 52) 00:14:55.595 22219.825 - 22344.655: 99.0295% ( 43) 00:14:55.595 22344.655 - 22469.486: 99.2236% ( 20) 00:14:55.595 22469.486 - 22594.316: 99.3789% ( 16) 00:14:55.595 22594.316 - 22719.147: 99.5148% ( 14) 00:14:55.595 22719.147 - 22843.977: 99.6021% ( 9) 00:14:55.595 22843.977 - 22968.808: 99.6603% ( 6) 00:14:55.595 22968.808 - 23093.638: 99.7186% ( 6) 00:14:55.595 23093.638 - 23218.469: 99.7671% ( 5) 00:14:55.595 23218.469 - 23343.299: 99.8350% ( 7) 00:14:55.595 23343.299 - 23468.130: 99.8932% ( 6) 00:14:55.595 23468.130 - 23592.960: 99.9224% ( 
3)
00:14:55.595 23592.960 - 23717.790: 99.9612% ( 4)
00:14:55.595 23717.790 - 23842.621: 99.9903% ( 3)
00:14:55.595 23842.621 - 23967.451: 100.0000% ( 1)
00:14:55.595
00:14:55.595 13:48:42 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:56.997 Initializing NVMe Controllers
00:14:56.997 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:56.997 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:56.997 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:56.997 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:56.997 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:56.997 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:14:56.997 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:14:56.997 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:14:56.997 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:14:56.997 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:14:56.997 Initialization complete. Launching workers.
00:14:56.997 ========================================================
00:14:56.997                                                                              Latency(us)
00:14:56.997 Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:56.997 PCIE (0000:00:10.0) NSID 1 from core 0:    9585.00     112.32   13407.59   10555.37   44384.76
00:14:56.997 PCIE (0000:00:11.0) NSID 1 from core 0:    9585.00     112.32   13391.04   10711.69   42534.40
00:14:56.997 PCIE (0000:00:13.0) NSID 1 from core 0:    9585.00     112.32   13374.13   10919.23   41582.20
00:14:56.997 PCIE (0000:00:12.0) NSID 1 from core 0:    9585.00     112.32   13358.77   10916.64   39543.30
00:14:56.997 PCIE (0000:00:12.0) NSID 2 from core 0:    9585.00     112.32   13342.81   10859.35   37865.42
00:14:56.997 PCIE (0000:00:12.0) NSID 3 from core 0:    9648.90     113.07   13238.40   10698.79   29457.70
00:14:56.997 ========================================================
00:14:56.997 Total                                  :   57573.90     674.69   13352.00   10555.37   44384.76
00:14:56.997
00:14:56.997 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:56.997 =================================================================================
00:14:56.997 1.00000% : 10922.667us
00:14:56.998 10.00000% : 11671.650us
00:14:56.998 25.00000% : 12170.971us
00:14:56.998 50.00000% : 12857.539us
00:14:56.998 75.00000% : 13731.352us
00:14:56.998 90.00000% : 15104.488us
00:14:56.998 95.00000% : 16227.962us
00:14:56.998 98.00000% : 17476.267us
00:14:56.998 99.00000% : 35701.516us
00:14:56.998 99.50000% : 42941.684us
00:14:56.998 99.90000% : 44189.989us
00:14:56.998 99.99000% : 44439.650us
00:14:56.998 99.99900% : 44439.650us
00:14:56.998 99.99990% : 44439.650us
00:14:56.998 99.99999% : 44439.650us
00:14:56.998
00:14:56.998 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:56.998 =================================================================================
00:14:56.998 1.00000% : 11109.912us
00:14:56.998 10.00000% : 11671.650us
00:14:56.998 25.00000% : 12233.387us
00:14:56.998 50.00000% : 12919.954us
00:14:56.998 75.00000% : 13606.522us
00:14:56.998 90.00000% : 15166.903us
00:14:56.998 95.00000% : 16477.623us
00:14:56.998 98.00000% : 17226.606us
00:14:56.998 99.00000% : 33704.229us
00:14:56.998 99.50000% : 41194.057us
00:14:56.998 99.90000% : 42442.362us
00:14:56.998 99.99000% : 42692.023us
00:14:56.998 99.99900% : 42692.023us
00:14:56.998 99.99990% : 42692.023us
00:14:56.998 99.99999% : 42692.023us
00:14:56.998
00:14:56.998 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:56.998 =================================================================================
00:14:56.998 1.00000% : 11234.743us
00:14:56.998 10.00000% : 11671.650us
00:14:56.998 25.00000% : 12170.971us
00:14:56.998 50.00000% : 12857.539us
00:14:56.998 75.00000% : 13606.522us
00:14:56.998 90.00000% : 15229.318us
00:14:56.998 95.00000% : 16602.453us
00:14:56.998 98.00000% : 17351.436us
00:14:56.998 99.00000% : 32955.246us
00:14:56.998 99.50000% : 40195.413us
00:14:56.998 99.90000% : 41443.718us
00:14:56.998 99.99000% : 41693.379us
00:14:56.998 99.99900% : 41693.379us
00:14:56.998 99.99990% : 41693.379us
00:14:56.998 99.99999% : 41693.379us
00:14:56.998
00:14:56.998 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:56.998 =================================================================================
00:14:56.998 1.00000% : 11234.743us
00:14:56.998 10.00000% : 11671.650us
00:14:56.998 25.00000% : 12170.971us
00:14:56.998 50.00000% : 12857.539us
00:14:56.998 75.00000% : 13668.937us
00:14:56.998 90.00000% : 15104.488us
00:14:56.998 95.00000% : 16477.623us
00:14:56.998 98.00000% : 17725.928us
00:14:56.998 99.00000% : 30957.958us
00:14:56.998 99.50000% : 38198.126us
00:14:56.998 99.90000% : 39446.430us
00:14:56.998 99.99000% : 39696.091us
00:14:56.998 99.99900% : 39696.091us
00:14:56.998 99.99990% : 39696.091us
00:14:56.998 99.99999% : 39696.091us
00:14:56.998
00:14:56.998 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:56.998 =================================================================================
00:14:56.998 1.00000% : 11109.912us
00:14:56.998 10.00000% : 11671.650us
00:14:56.998 25.00000% : 12233.387us
00:14:56.998 50.00000% : 12857.539us
00:14:56.998 75.00000% : 13668.937us
00:14:56.998 90.00000% : 15104.488us
00:14:56.998 95.00000% : 16352.792us
00:14:56.998 98.00000% : 17850.758us
00:14:56.998 99.00000% : 28835.840us
00:14:56.998 99.50000% : 36700.160us
00:14:56.998 99.90000% : 37698.804us
00:14:56.998 99.99000% : 37948.465us
00:14:56.998 99.99900% : 37948.465us
00:14:56.998 99.99990% : 37948.465us
00:14:56.998 99.99999% : 37948.465us
00:14:56.998
00:14:56.998 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:56.998 =================================================================================
00:14:56.998 1.00000% : 11109.912us
00:14:56.998 10.00000% : 11671.650us
00:14:56.998 25.00000% : 12233.387us
00:14:56.998 50.00000% : 12857.539us
00:14:56.998 75.00000% : 13668.937us
00:14:56.998 90.00000% : 15166.903us
00:14:56.998 95.00000% : 16103.131us
00:14:56.998 98.00000% : 18100.419us
00:14:56.998 99.00000% : 20971.520us
00:14:56.998 99.50000% : 28086.857us
00:14:56.998 99.90000% : 29210.331us
00:14:56.998 99.99000% : 29459.992us
00:14:56.998 99.99900% : 29459.992us
00:14:56.998 99.99990% : 29459.992us
00:14:56.998 99.99999% : 29459.992us
00:14:56.998
00:14:56.998 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:56.998 ==============================================================================
00:14:56.998        Range in us     Cumulative IO count
00:14:56.998 10548.175 - 10610.590: 0.0104% ( 1)
00:14:56.998 10610.590 - 10673.006: 0.0625% ( 5)
00:14:56.998 10673.006 - 10735.421: 0.2812% ( 21)
00:14:56.998 10735.421 - 10797.836: 0.5000% ( 21)
00:14:56.998 10797.836 - 10860.251: 0.8854% ( 37)
00:14:56.998 10860.251 - 10922.667: 1.1979% ( 30)
00:14:56.998 10922.667 - 10985.082: 1.6042% ( 39)
00:14:56.998 10985.082 - 11047.497: 2.0104% ( 39)
00:14:56.998 11047.497 - 11109.912: 2.6562% ( 62)
00:14:56.998 11109.912 -
11172.328: 3.5625% ( 87) 00:14:56.998 11172.328 - 11234.743: 4.2500% ( 66) 00:14:56.998 11234.743 - 11297.158: 5.0312% ( 75) 00:14:56.998 11297.158 - 11359.573: 5.7188% ( 66) 00:14:56.998 11359.573 - 11421.989: 6.6354% ( 88) 00:14:56.998 11421.989 - 11484.404: 7.7292% ( 105) 00:14:56.998 11484.404 - 11546.819: 8.6042% ( 84) 00:14:56.998 11546.819 - 11609.234: 9.9583% ( 130) 00:14:56.998 11609.234 - 11671.650: 11.2604% ( 125) 00:14:56.998 11671.650 - 11734.065: 12.4167% ( 111) 00:14:56.998 11734.065 - 11796.480: 13.8542% ( 138) 00:14:56.998 11796.480 - 11858.895: 15.5208% ( 160) 00:14:56.998 11858.895 - 11921.310: 17.0729% ( 149) 00:14:56.998 11921.310 - 11983.726: 18.8646% ( 172) 00:14:56.998 11983.726 - 12046.141: 21.0938% ( 214) 00:14:56.998 12046.141 - 12108.556: 23.1354% ( 196) 00:14:56.998 12108.556 - 12170.971: 25.2292% ( 201) 00:14:56.998 12170.971 - 12233.387: 27.4062% ( 209) 00:14:56.998 12233.387 - 12295.802: 29.3021% ( 182) 00:14:56.998 12295.802 - 12358.217: 31.3333% ( 195) 00:14:56.998 12358.217 - 12420.632: 33.2708% ( 186) 00:14:56.998 12420.632 - 12483.048: 35.5833% ( 222) 00:14:56.998 12483.048 - 12545.463: 38.1458% ( 246) 00:14:56.998 12545.463 - 12607.878: 40.8125% ( 256) 00:14:56.998 12607.878 - 12670.293: 43.2083% ( 230) 00:14:56.998 12670.293 - 12732.709: 45.6562% ( 235) 00:14:56.998 12732.709 - 12795.124: 47.9896% ( 224) 00:14:56.998 12795.124 - 12857.539: 50.3125% ( 223) 00:14:56.998 12857.539 - 12919.954: 52.5729% ( 217) 00:14:56.998 12919.954 - 12982.370: 55.0000% ( 233) 00:14:56.998 12982.370 - 13044.785: 57.1354% ( 205) 00:14:56.998 13044.785 - 13107.200: 59.3333% ( 211) 00:14:56.998 13107.200 - 13169.615: 61.3542% ( 194) 00:14:56.998 13169.615 - 13232.030: 63.2708% ( 184) 00:14:56.998 13232.030 - 13294.446: 65.4583% ( 210) 00:14:56.998 13294.446 - 13356.861: 67.2083% ( 168) 00:14:56.998 13356.861 - 13419.276: 68.9583% ( 168) 00:14:56.998 13419.276 - 13481.691: 70.5521% ( 153) 00:14:56.998 13481.691 - 13544.107: 71.9792% ( 137) 00:14:56.998 13544.107 - 13606.522: 73.4583% ( 142) 00:14:56.998 13606.522 - 13668.937: 74.8021% ( 129) 00:14:56.998 13668.937 - 13731.352: 76.2083% ( 135) 00:14:56.998 13731.352 - 13793.768: 77.4479% ( 119) 00:14:56.998 13793.768 - 13856.183: 78.5104% ( 102) 00:14:56.998 13856.183 - 13918.598: 79.5833% ( 103) 00:14:56.998 13918.598 - 13981.013: 80.5625% ( 94) 00:14:56.998 13981.013 - 14043.429: 81.4167% ( 82) 00:14:56.998 14043.429 - 14105.844: 82.1250% ( 68) 00:14:56.998 14105.844 - 14168.259: 82.7396% ( 59) 00:14:56.998 14168.259 - 14230.674: 83.3542% ( 59) 00:14:56.998 14230.674 - 14293.090: 84.0208% ( 64) 00:14:56.998 14293.090 - 14355.505: 84.5729% ( 53) 00:14:56.998 14355.505 - 14417.920: 85.1354% ( 54) 00:14:56.998 14417.920 - 14480.335: 85.5625% ( 41) 00:14:56.998 14480.335 - 14542.750: 85.9792% ( 40) 00:14:56.998 14542.750 - 14605.166: 86.3750% ( 38) 00:14:56.998 14605.166 - 14667.581: 86.7812% ( 39) 00:14:56.998 14667.581 - 14729.996: 87.2083% ( 41) 00:14:56.998 14729.996 - 14792.411: 87.7292% ( 50) 00:14:56.998 14792.411 - 14854.827: 88.4167% ( 66) 00:14:56.998 14854.827 - 14917.242: 88.9479% ( 51) 00:14:56.998 14917.242 - 14979.657: 89.4271% ( 46) 00:14:56.998 14979.657 - 15042.072: 89.8229% ( 38) 00:14:56.998 15042.072 - 15104.488: 90.1250% ( 29) 00:14:56.998 15104.488 - 15166.903: 90.4583% ( 32) 00:14:56.998 15166.903 - 15229.318: 90.7708% ( 30) 00:14:56.998 15229.318 - 15291.733: 91.0938% ( 31) 00:14:56.998 15291.733 - 15354.149: 91.5000% ( 39) 00:14:56.998 15354.149 - 15416.564: 91.9062% ( 39) 00:14:56.998 15416.564 - 
15478.979: 92.2708% ( 35) 00:14:56.998 15478.979 - 15541.394: 92.5000% ( 22) 00:14:56.998 15541.394 - 15603.810: 92.8021% ( 29) 00:14:56.998 15603.810 - 15666.225: 93.1458% ( 33) 00:14:56.998 15666.225 - 15728.640: 93.4688% ( 31) 00:14:56.998 15728.640 - 15791.055: 93.6562% ( 18) 00:14:56.998 15791.055 - 15853.470: 94.0417% ( 37) 00:14:56.998 15853.470 - 15915.886: 94.3125% ( 26) 00:14:56.998 15915.886 - 15978.301: 94.5417% ( 22) 00:14:56.998 15978.301 - 16103.131: 94.8542% ( 30) 00:14:56.998 16103.131 - 16227.962: 95.1250% ( 26) 00:14:56.998 16227.962 - 16352.792: 95.4375% ( 30) 00:14:56.998 16352.792 - 16477.623: 95.8542% ( 40) 00:14:56.998 16477.623 - 16602.453: 96.2500% ( 38) 00:14:56.998 16602.453 - 16727.284: 96.8021% ( 53) 00:14:56.998 16727.284 - 16852.114: 97.0000% ( 19) 00:14:56.998 16852.114 - 16976.945: 97.1667% ( 16) 00:14:56.998 16976.945 - 17101.775: 97.3646% ( 19) 00:14:56.998 17101.775 - 17226.606: 97.5938% ( 22) 00:14:56.998 17226.606 - 17351.436: 97.8021% ( 20) 00:14:56.998 17351.436 - 17476.267: 98.0417% ( 23) 00:14:56.998 17476.267 - 17601.097: 98.2292% ( 18) 00:14:56.998 17601.097 - 17725.928: 98.3854% ( 15) 00:14:56.998 17725.928 - 17850.758: 98.5000% ( 11) 00:14:56.998 17850.758 - 17975.589: 98.5625% ( 6) 00:14:56.998 17975.589 - 18100.419: 98.6146% ( 5) 00:14:56.998 18100.419 - 18225.250: 98.6562% ( 4) 00:14:56.998 18225.250 - 18350.080: 98.6667% ( 1) 00:14:56.998 34453.211 - 34702.872: 98.7292% ( 6) 00:14:56.998 34702.872 - 34952.533: 98.8125% ( 8) 00:14:56.998 34952.533 - 35202.194: 98.8958% ( 8) 00:14:56.998 35202.194 - 35451.855: 98.9792% ( 8) 00:14:56.999 35451.855 - 35701.516: 99.0625% ( 8) 00:14:56.999 35701.516 - 35951.177: 99.1458% ( 8) 00:14:56.999 35951.177 - 36200.838: 99.2292% ( 8) 00:14:56.999 36200.838 - 36450.499: 99.3229% ( 9) 00:14:56.999 36450.499 - 36700.160: 99.3333% ( 1) 00:14:56.999 42192.701 - 42442.362: 99.3958% ( 6) 00:14:56.999 42442.362 - 42692.023: 99.4688% ( 7) 00:14:56.999 42692.023 - 42941.684: 99.5521% ( 8) 00:14:56.999 42941.684 - 43191.345: 99.6250% ( 7) 00:14:56.999 43191.345 - 43441.006: 99.6979% ( 7) 00:14:56.999 43441.006 - 43690.667: 99.8021% ( 10) 00:14:56.999 43690.667 - 43940.328: 99.8750% ( 7) 00:14:56.999 43940.328 - 44189.989: 99.9583% ( 8) 00:14:56.999 44189.989 - 44439.650: 100.0000% ( 4) 00:14:56.999 00:14:56.999 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:56.999 ============================================================================== 00:14:56.999 Range in us Cumulative IO count 00:14:56.999 10673.006 - 10735.421: 0.0104% ( 1) 00:14:56.999 10735.421 - 10797.836: 0.0521% ( 4) 00:14:56.999 10797.836 - 10860.251: 0.1458% ( 9) 00:14:56.999 10860.251 - 10922.667: 0.3021% ( 15) 00:14:56.999 10922.667 - 10985.082: 0.5417% ( 23) 00:14:56.999 10985.082 - 11047.497: 0.8646% ( 31) 00:14:56.999 11047.497 - 11109.912: 1.2917% ( 41) 00:14:56.999 11109.912 - 11172.328: 1.9062% ( 59) 00:14:56.999 11172.328 - 11234.743: 2.6667% ( 73) 00:14:56.999 11234.743 - 11297.158: 3.5833% ( 88) 00:14:56.999 11297.158 - 11359.573: 4.5938% ( 97) 00:14:56.999 11359.573 - 11421.989: 5.5417% ( 91) 00:14:56.999 11421.989 - 11484.404: 6.7292% ( 114) 00:14:56.999 11484.404 - 11546.819: 7.9792% ( 120) 00:14:56.999 11546.819 - 11609.234: 9.1771% ( 115) 00:14:56.999 11609.234 - 11671.650: 10.2604% ( 104) 00:14:56.999 11671.650 - 11734.065: 11.6667% ( 135) 00:14:56.999 11734.065 - 11796.480: 12.9583% ( 124) 00:14:56.999 11796.480 - 11858.895: 14.8333% ( 180) 00:14:56.999 11858.895 - 11921.310: 16.4583% ( 156) 00:14:56.999 
11921.310 - 11983.726: 18.2396% ( 171) 00:14:56.999 11983.726 - 12046.141: 20.3646% ( 204) 00:14:56.999 12046.141 - 12108.556: 22.4375% ( 199) 00:14:56.999 12108.556 - 12170.971: 24.4583% ( 194) 00:14:56.999 12170.971 - 12233.387: 26.5104% ( 197) 00:14:56.999 12233.387 - 12295.802: 28.8854% ( 228) 00:14:56.999 12295.802 - 12358.217: 31.1771% ( 220) 00:14:56.999 12358.217 - 12420.632: 33.4479% ( 218) 00:14:56.999 12420.632 - 12483.048: 35.5521% ( 202) 00:14:56.999 12483.048 - 12545.463: 37.5208% ( 189) 00:14:56.999 12545.463 - 12607.878: 39.6667% ( 206) 00:14:56.999 12607.878 - 12670.293: 41.7604% ( 201) 00:14:56.999 12670.293 - 12732.709: 43.9583% ( 211) 00:14:56.999 12732.709 - 12795.124: 46.5000% ( 244) 00:14:56.999 12795.124 - 12857.539: 49.1146% ( 251) 00:14:56.999 12857.539 - 12919.954: 51.9792% ( 275) 00:14:56.999 12919.954 - 12982.370: 54.9167% ( 282) 00:14:56.999 12982.370 - 13044.785: 57.6146% ( 259) 00:14:56.999 13044.785 - 13107.200: 60.2604% ( 254) 00:14:56.999 13107.200 - 13169.615: 62.7604% ( 240) 00:14:56.999 13169.615 - 13232.030: 65.0104% ( 216) 00:14:56.999 13232.030 - 13294.446: 67.0833% ( 199) 00:14:56.999 13294.446 - 13356.861: 69.0938% ( 193) 00:14:56.999 13356.861 - 13419.276: 71.0625% ( 189) 00:14:56.999 13419.276 - 13481.691: 72.8646% ( 173) 00:14:56.999 13481.691 - 13544.107: 74.4167% ( 149) 00:14:56.999 13544.107 - 13606.522: 75.8021% ( 133) 00:14:56.999 13606.522 - 13668.937: 76.9479% ( 110) 00:14:56.999 13668.937 - 13731.352: 78.0208% ( 103) 00:14:56.999 13731.352 - 13793.768: 79.0104% ( 95) 00:14:56.999 13793.768 - 13856.183: 79.8958% ( 85) 00:14:56.999 13856.183 - 13918.598: 80.6979% ( 77) 00:14:56.999 13918.598 - 13981.013: 81.4896% ( 76) 00:14:56.999 13981.013 - 14043.429: 82.2604% ( 74) 00:14:56.999 14043.429 - 14105.844: 83.0729% ( 78) 00:14:56.999 14105.844 - 14168.259: 83.6875% ( 59) 00:14:56.999 14168.259 - 14230.674: 84.0833% ( 38) 00:14:56.999 14230.674 - 14293.090: 84.3958% ( 30) 00:14:56.999 14293.090 - 14355.505: 84.6875% ( 28) 00:14:56.999 14355.505 - 14417.920: 85.0833% ( 38) 00:14:56.999 14417.920 - 14480.335: 85.3854% ( 29) 00:14:56.999 14480.335 - 14542.750: 85.8542% ( 45) 00:14:56.999 14542.750 - 14605.166: 86.3750% ( 50) 00:14:56.999 14605.166 - 14667.581: 87.0208% ( 62) 00:14:56.999 14667.581 - 14729.996: 87.5521% ( 51) 00:14:56.999 14729.996 - 14792.411: 87.9688% ( 40) 00:14:56.999 14792.411 - 14854.827: 88.3958% ( 41) 00:14:56.999 14854.827 - 14917.242: 88.7083% ( 30) 00:14:56.999 14917.242 - 14979.657: 89.1562% ( 43) 00:14:56.999 14979.657 - 15042.072: 89.5625% ( 39) 00:14:56.999 15042.072 - 15104.488: 89.9583% ( 38) 00:14:56.999 15104.488 - 15166.903: 90.3438% ( 37) 00:14:56.999 15166.903 - 15229.318: 90.7292% ( 37) 00:14:56.999 15229.318 - 15291.733: 91.1667% ( 42) 00:14:56.999 15291.733 - 15354.149: 91.5000% ( 32) 00:14:56.999 15354.149 - 15416.564: 91.7396% ( 23) 00:14:56.999 15416.564 - 15478.979: 91.9688% ( 22) 00:14:56.999 15478.979 - 15541.394: 92.1458% ( 17) 00:14:56.999 15541.394 - 15603.810: 92.3333% ( 18) 00:14:56.999 15603.810 - 15666.225: 92.5000% ( 16) 00:14:56.999 15666.225 - 15728.640: 92.6667% ( 16) 00:14:56.999 15728.640 - 15791.055: 92.8333% ( 16) 00:14:56.999 15791.055 - 15853.470: 92.9688% ( 13) 00:14:56.999 15853.470 - 15915.886: 93.1458% ( 17) 00:14:56.999 15915.886 - 15978.301: 93.3750% ( 22) 00:14:56.999 15978.301 - 16103.131: 93.8958% ( 50) 00:14:56.999 16103.131 - 16227.962: 94.3958% ( 48) 00:14:56.999 16227.962 - 16352.792: 94.9271% ( 51) 00:14:56.999 16352.792 - 16477.623: 95.3854% ( 44) 00:14:56.999 
16477.623 - 16602.453: 95.8958% ( 49) 00:14:56.999 16602.453 - 16727.284: 96.3958% ( 48) 00:14:56.999 16727.284 - 16852.114: 96.9062% ( 49) 00:14:56.999 16852.114 - 16976.945: 97.3646% ( 44) 00:14:56.999 16976.945 - 17101.775: 97.7604% ( 38) 00:14:56.999 17101.775 - 17226.606: 98.0312% ( 26) 00:14:56.999 17226.606 - 17351.436: 98.2812% ( 24) 00:14:56.999 17351.436 - 17476.267: 98.5000% ( 21) 00:14:56.999 17476.267 - 17601.097: 98.5729% ( 7) 00:14:56.999 17601.097 - 17725.928: 98.6042% ( 3) 00:14:56.999 17725.928 - 17850.758: 98.6562% ( 5) 00:14:56.999 17850.758 - 17975.589: 98.6667% ( 1) 00:14:56.999 32705.585 - 32955.246: 98.7604% ( 9) 00:14:56.999 32955.246 - 33204.907: 98.8438% ( 8) 00:14:56.999 33204.907 - 33454.568: 98.9271% ( 8) 00:14:56.999 33454.568 - 33704.229: 99.0104% ( 8) 00:14:56.999 33704.229 - 33953.890: 99.1042% ( 9) 00:14:56.999 33953.890 - 34203.550: 99.1979% ( 9) 00:14:56.999 34203.550 - 34453.211: 99.2812% ( 8) 00:14:56.999 34453.211 - 34702.872: 99.3333% ( 5) 00:14:56.999 40445.074 - 40694.735: 99.3542% ( 2) 00:14:56.999 40694.735 - 40944.396: 99.4375% ( 8) 00:14:56.999 40944.396 - 41194.057: 99.5208% ( 8) 00:14:56.999 41194.057 - 41443.718: 99.6146% ( 9) 00:14:56.999 41443.718 - 41693.379: 99.7083% ( 9) 00:14:56.999 41693.379 - 41943.040: 99.7812% ( 7) 00:14:56.999 41943.040 - 42192.701: 99.8750% ( 9) 00:14:56.999 42192.701 - 42442.362: 99.9583% ( 8) 00:14:56.999 42442.362 - 42692.023: 100.0000% ( 4) 00:14:56.999 00:14:56.999 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:56.999 ============================================================================== 00:14:56.999 Range in us Cumulative IO count 00:14:56.999 10860.251 - 10922.667: 0.0104% ( 1) 00:14:56.999 10922.667 - 10985.082: 0.0833% ( 7) 00:14:56.999 10985.082 - 11047.497: 0.2188% ( 13) 00:14:56.999 11047.497 - 11109.912: 0.4167% ( 19) 00:14:56.999 11109.912 - 11172.328: 0.7500% ( 32) 00:14:56.999 11172.328 - 11234.743: 1.4479% ( 67) 00:14:56.999 11234.743 - 11297.158: 2.4271% ( 94) 00:14:56.999 11297.158 - 11359.573: 3.5312% ( 106) 00:14:56.999 11359.573 - 11421.989: 4.8542% ( 127) 00:14:56.999 11421.989 - 11484.404: 6.5729% ( 165) 00:14:56.999 11484.404 - 11546.819: 8.0833% ( 145) 00:14:56.999 11546.819 - 11609.234: 9.5938% ( 145) 00:14:56.999 11609.234 - 11671.650: 10.9688% ( 132) 00:14:56.999 11671.650 - 11734.065: 12.4583% ( 143) 00:14:56.999 11734.065 - 11796.480: 14.1875% ( 166) 00:14:56.999 11796.480 - 11858.895: 15.9792% ( 172) 00:14:56.999 11858.895 - 11921.310: 17.9896% ( 193) 00:14:56.999 11921.310 - 11983.726: 20.0000% ( 193) 00:14:56.999 11983.726 - 12046.141: 22.0000% ( 192) 00:14:56.999 12046.141 - 12108.556: 24.0625% ( 198) 00:14:56.999 12108.556 - 12170.971: 26.1667% ( 202) 00:14:56.999 12170.971 - 12233.387: 28.4896% ( 223) 00:14:56.999 12233.387 - 12295.802: 30.6667% ( 209) 00:14:56.999 12295.802 - 12358.217: 32.8021% ( 205) 00:14:56.999 12358.217 - 12420.632: 34.9375% ( 205) 00:14:56.999 12420.632 - 12483.048: 37.3021% ( 227) 00:14:56.999 12483.048 - 12545.463: 39.5833% ( 219) 00:14:56.999 12545.463 - 12607.878: 41.5417% ( 188) 00:14:56.999 12607.878 - 12670.293: 43.8958% ( 226) 00:14:56.999 12670.293 - 12732.709: 46.4271% ( 243) 00:14:56.999 12732.709 - 12795.124: 48.9271% ( 240) 00:14:56.999 12795.124 - 12857.539: 51.2396% ( 222) 00:14:56.999 12857.539 - 12919.954: 53.5938% ( 226) 00:14:56.999 12919.954 - 12982.370: 56.0208% ( 233) 00:14:56.999 12982.370 - 13044.785: 58.4167% ( 230) 00:14:56.999 13044.785 - 13107.200: 60.5938% ( 209) 00:14:56.999 13107.200 - 
13169.615: 62.7292% ( 205) 00:14:56.999 13169.615 - 13232.030: 64.8854% ( 207) 00:14:56.999 13232.030 - 13294.446: 67.1771% ( 220) 00:14:56.999 13294.446 - 13356.861: 69.0208% ( 177) 00:14:56.999 13356.861 - 13419.276: 70.8333% ( 174) 00:14:56.999 13419.276 - 13481.691: 72.3229% ( 143) 00:14:56.999 13481.691 - 13544.107: 73.7812% ( 140) 00:14:56.999 13544.107 - 13606.522: 75.1875% ( 135) 00:14:56.999 13606.522 - 13668.937: 76.3750% ( 114) 00:14:56.999 13668.937 - 13731.352: 77.5208% ( 110) 00:14:56.999 13731.352 - 13793.768: 78.5521% ( 99) 00:14:56.999 13793.768 - 13856.183: 79.5104% ( 92) 00:14:56.999 13856.183 - 13918.598: 80.3333% ( 79) 00:14:57.000 13918.598 - 13981.013: 81.0625% ( 70) 00:14:57.000 13981.013 - 14043.429: 81.7292% ( 64) 00:14:57.000 14043.429 - 14105.844: 82.2708% ( 52) 00:14:57.000 14105.844 - 14168.259: 82.7917% ( 50) 00:14:57.000 14168.259 - 14230.674: 83.2396% ( 43) 00:14:57.000 14230.674 - 14293.090: 83.6771% ( 42) 00:14:57.000 14293.090 - 14355.505: 84.1771% ( 48) 00:14:57.000 14355.505 - 14417.920: 84.6667% ( 47) 00:14:57.000 14417.920 - 14480.335: 85.2292% ( 54) 00:14:57.000 14480.335 - 14542.750: 85.7396% ( 49) 00:14:57.000 14542.750 - 14605.166: 86.2292% ( 47) 00:14:57.000 14605.166 - 14667.581: 86.5938% ( 35) 00:14:57.000 14667.581 - 14729.996: 86.9062% ( 30) 00:14:57.000 14729.996 - 14792.411: 87.2812% ( 36) 00:14:57.000 14792.411 - 14854.827: 87.7708% ( 47) 00:14:57.000 14854.827 - 14917.242: 88.1562% ( 37) 00:14:57.000 14917.242 - 14979.657: 88.5104% ( 34) 00:14:57.000 14979.657 - 15042.072: 88.8750% ( 35) 00:14:57.000 15042.072 - 15104.488: 89.3125% ( 42) 00:14:57.000 15104.488 - 15166.903: 89.8646% ( 53) 00:14:57.000 15166.903 - 15229.318: 90.3750% ( 49) 00:14:57.000 15229.318 - 15291.733: 90.9271% ( 53) 00:14:57.000 15291.733 - 15354.149: 91.3750% ( 43) 00:14:57.000 15354.149 - 15416.564: 91.6979% ( 31) 00:14:57.000 15416.564 - 15478.979: 91.9062% ( 20) 00:14:57.000 15478.979 - 15541.394: 92.1250% ( 21) 00:14:57.000 15541.394 - 15603.810: 92.3646% ( 23) 00:14:57.000 15603.810 - 15666.225: 92.6250% ( 25) 00:14:57.000 15666.225 - 15728.640: 92.8021% ( 17) 00:14:57.000 15728.640 - 15791.055: 93.0208% ( 21) 00:14:57.000 15791.055 - 15853.470: 93.2188% ( 19) 00:14:57.000 15853.470 - 15915.886: 93.4062% ( 18) 00:14:57.000 15915.886 - 15978.301: 93.6354% ( 22) 00:14:57.000 15978.301 - 16103.131: 94.0104% ( 36) 00:14:57.000 16103.131 - 16227.962: 94.3854% ( 36) 00:14:57.000 16227.962 - 16352.792: 94.6667% ( 27) 00:14:57.000 16352.792 - 16477.623: 94.9792% ( 30) 00:14:57.000 16477.623 - 16602.453: 95.3438% ( 35) 00:14:57.000 16602.453 - 16727.284: 95.8646% ( 50) 00:14:57.000 16727.284 - 16852.114: 96.5000% ( 61) 00:14:57.000 16852.114 - 16976.945: 97.0938% ( 57) 00:14:57.000 16976.945 - 17101.775: 97.5729% ( 46) 00:14:57.000 17101.775 - 17226.606: 97.8438% ( 26) 00:14:57.000 17226.606 - 17351.436: 98.1250% ( 27) 00:14:57.000 17351.436 - 17476.267: 98.3438% ( 21) 00:14:57.000 17476.267 - 17601.097: 98.4583% ( 11) 00:14:57.000 17601.097 - 17725.928: 98.5417% ( 8) 00:14:57.000 17725.928 - 17850.758: 98.5938% ( 5) 00:14:57.000 17850.758 - 17975.589: 98.6458% ( 5) 00:14:57.000 17975.589 - 18100.419: 98.6667% ( 2) 00:14:57.000 31956.602 - 32206.263: 98.7396% ( 7) 00:14:57.000 32206.263 - 32455.924: 98.8333% ( 9) 00:14:57.000 32455.924 - 32705.585: 98.9271% ( 9) 00:14:57.000 32705.585 - 32955.246: 99.0208% ( 9) 00:14:57.000 32955.246 - 33204.907: 99.1146% ( 9) 00:14:57.000 33204.907 - 33454.568: 99.2083% ( 9) 00:14:57.000 33454.568 - 33704.229: 99.2917% ( 8) 
00:14:57.000 33704.229 - 33953.890: 99.3333% ( 4) 00:14:57.000 39696.091 - 39945.752: 99.4167% ( 8) 00:14:57.000 39945.752 - 40195.413: 99.5000% ( 8) 00:14:57.000 40195.413 - 40445.074: 99.5938% ( 9) 00:14:57.000 40445.074 - 40694.735: 99.6771% ( 8) 00:14:57.000 40694.735 - 40944.396: 99.7604% ( 8) 00:14:57.000 40944.396 - 41194.057: 99.8542% ( 9) 00:14:57.000 41194.057 - 41443.718: 99.9479% ( 9) 00:14:57.000 41443.718 - 41693.379: 100.0000% ( 5) 00:14:57.000 00:14:57.000 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:57.000 ============================================================================== 00:14:57.000 Range in us Cumulative IO count 00:14:57.000 10860.251 - 10922.667: 0.0104% ( 1) 00:14:57.000 10922.667 - 10985.082: 0.1458% ( 13) 00:14:57.000 10985.082 - 11047.497: 0.3229% ( 17) 00:14:57.000 11047.497 - 11109.912: 0.5312% ( 20) 00:14:57.000 11109.912 - 11172.328: 0.8958% ( 35) 00:14:57.000 11172.328 - 11234.743: 1.5208% ( 60) 00:14:57.000 11234.743 - 11297.158: 2.2396% ( 69) 00:14:57.000 11297.158 - 11359.573: 3.3438% ( 106) 00:14:57.000 11359.573 - 11421.989: 4.8021% ( 140) 00:14:57.000 11421.989 - 11484.404: 6.5000% ( 163) 00:14:57.000 11484.404 - 11546.819: 7.9583% ( 140) 00:14:57.000 11546.819 - 11609.234: 9.6458% ( 162) 00:14:57.000 11609.234 - 11671.650: 11.1354% ( 143) 00:14:57.000 11671.650 - 11734.065: 12.7500% ( 155) 00:14:57.000 11734.065 - 11796.480: 14.0938% ( 129) 00:14:57.000 11796.480 - 11858.895: 15.5208% ( 137) 00:14:57.000 11858.895 - 11921.310: 17.1354% ( 155) 00:14:57.000 11921.310 - 11983.726: 18.8854% ( 168) 00:14:57.000 11983.726 - 12046.141: 20.8646% ( 190) 00:14:57.000 12046.141 - 12108.556: 22.9896% ( 204) 00:14:57.000 12108.556 - 12170.971: 25.7188% ( 262) 00:14:57.000 12170.971 - 12233.387: 28.1458% ( 233) 00:14:57.000 12233.387 - 12295.802: 30.2812% ( 205) 00:14:57.000 12295.802 - 12358.217: 32.5417% ( 217) 00:14:57.000 12358.217 - 12420.632: 34.9271% ( 229) 00:14:57.000 12420.632 - 12483.048: 37.2708% ( 225) 00:14:57.000 12483.048 - 12545.463: 39.5312% ( 217) 00:14:57.000 12545.463 - 12607.878: 41.8750% ( 225) 00:14:57.000 12607.878 - 12670.293: 44.3021% ( 233) 00:14:57.000 12670.293 - 12732.709: 46.6771% ( 228) 00:14:57.000 12732.709 - 12795.124: 48.7083% ( 195) 00:14:57.000 12795.124 - 12857.539: 50.7188% ( 193) 00:14:57.000 12857.539 - 12919.954: 52.5417% ( 175) 00:14:57.000 12919.954 - 12982.370: 54.5729% ( 195) 00:14:57.000 12982.370 - 13044.785: 56.7708% ( 211) 00:14:57.000 13044.785 - 13107.200: 59.2917% ( 242) 00:14:57.000 13107.200 - 13169.615: 61.7396% ( 235) 00:14:57.000 13169.615 - 13232.030: 64.0417% ( 221) 00:14:57.000 13232.030 - 13294.446: 66.4062% ( 227) 00:14:57.000 13294.446 - 13356.861: 68.6042% ( 211) 00:14:57.000 13356.861 - 13419.276: 70.1354% ( 147) 00:14:57.000 13419.276 - 13481.691: 71.6562% ( 146) 00:14:57.000 13481.691 - 13544.107: 73.2708% ( 155) 00:14:57.000 13544.107 - 13606.522: 74.5625% ( 124) 00:14:57.000 13606.522 - 13668.937: 75.8750% ( 126) 00:14:57.000 13668.937 - 13731.352: 76.9375% ( 102) 00:14:57.000 13731.352 - 13793.768: 77.9167% ( 94) 00:14:57.000 13793.768 - 13856.183: 78.8854% ( 93) 00:14:57.000 13856.183 - 13918.598: 79.6042% ( 69) 00:14:57.000 13918.598 - 13981.013: 80.3438% ( 71) 00:14:57.000 13981.013 - 14043.429: 81.1771% ( 80) 00:14:57.000 14043.429 - 14105.844: 81.8854% ( 68) 00:14:57.000 14105.844 - 14168.259: 82.6875% ( 77) 00:14:57.000 14168.259 - 14230.674: 83.5521% ( 83) 00:14:57.000 14230.674 - 14293.090: 84.3542% ( 77) 00:14:57.000 14293.090 - 14355.505: 84.8958% ( 
52)
00:14:57.000 14355.505 - 14417.920: 85.4062% ( 49)
00:14:57.000 14417.920 - 14480.335: 85.8333% ( 41)
00:14:57.000 14480.335 - 14542.750: 86.2292% ( 38)
00:14:57.000 14542.750 - 14605.166: 86.6146% ( 37)
00:14:57.000 14605.166 - 14667.581: 86.9479% ( 32)
00:14:57.000 14667.581 - 14729.996: 87.3229% ( 36)
00:14:57.000 14729.996 - 14792.411: 87.8333% ( 49)
00:14:57.000 14792.411 - 14854.827: 88.3333% ( 48)
00:14:57.000 14854.827 - 14917.242: 88.7500% ( 40)
00:14:57.000 14917.242 - 14979.657: 89.1146% ( 35)
00:14:57.000 14979.657 - 15042.072: 89.5312% ( 40)
00:14:57.000 15042.072 - 15104.488: 90.1250% ( 57)
00:14:57.000 15104.488 - 15166.903: 90.5417% ( 40)
00:14:57.000 15166.903 - 15229.318: 90.9167% ( 36)
00:14:57.000 15229.318 - 15291.733: 91.1771% ( 25)
00:14:57.000 15291.733 - 15354.149: 91.4792% ( 29)
00:14:57.000 15354.149 - 15416.564: 91.7292% ( 24)
00:14:57.000 15416.564 - 15478.979: 91.9896% ( 25)
00:14:57.000 15478.979 - 15541.394: 92.3229% ( 32)
00:14:57.000 15541.394 - 15603.810: 92.6146% ( 28)
00:14:57.000 15603.810 - 15666.225: 92.9062% ( 28)
00:14:57.000 15666.225 - 15728.640: 93.2083% ( 29)
00:14:57.000 15728.640 - 15791.055: 93.4479% ( 23)
00:14:57.000 15791.055 - 15853.470: 93.7396% ( 28)
00:14:57.000 15853.470 - 15915.886: 94.0208% ( 27)
00:14:57.000 15915.886 - 15978.301: 94.2812% ( 25)
00:14:57.000 15978.301 - 16103.131: 94.5625% ( 27)
00:14:57.000 16103.131 - 16227.962: 94.7396% ( 17)
00:14:57.000 16227.962 - 16352.792: 94.9583% ( 21)
00:14:57.000 16352.792 - 16477.623: 95.1562% ( 19)
00:14:57.000 16477.623 - 16602.453: 95.3125% ( 15)
00:14:57.000 16602.453 - 16727.284: 95.5625% ( 24)
00:14:57.000 16727.284 - 16852.114: 95.9062% ( 33)
00:14:57.000 16852.114 - 16976.945: 96.1875% ( 27)
00:14:57.000 16976.945 - 17101.775: 96.6354% ( 43)
00:14:57.000 17101.775 - 17226.606: 96.9375% ( 29)
00:14:57.000 17226.606 - 17351.436: 97.2500% ( 30)
00:14:57.000 17351.436 - 17476.267: 97.5833% ( 32)
00:14:57.000 17476.267 - 17601.097: 97.8958% ( 30)
00:14:57.000 17601.097 - 17725.928: 98.1250% ( 22)
00:14:57.000 17725.928 - 17850.758: 98.3542% ( 22)
00:14:57.000 17850.758 - 17975.589: 98.5312% ( 17)
00:14:57.000 17975.589 - 18100.419: 98.6354% ( 10)
00:14:57.000 18100.419 - 18225.250: 98.6667% ( 3)
00:14:57.000 29959.314 - 30084.145: 98.6979% ( 3)
00:14:57.000 30084.145 - 30208.975: 98.7396% ( 4)
00:14:57.000 30208.975 - 30333.806: 98.7812% ( 4)
00:14:57.000 30333.806 - 30458.636: 98.8333% ( 5)
00:14:57.000 30458.636 - 30583.467: 98.8750% ( 4)
00:14:57.000 30583.467 - 30708.297: 98.9271% ( 5)
00:14:57.000 30708.297 - 30833.128: 98.9688% ( 4)
00:14:57.000 30833.128 - 30957.958: 99.0104% ( 4)
00:14:57.000 30957.958 - 31082.789: 99.0521% ( 4)
00:14:57.000 31082.789 - 31207.619: 99.0938% ( 4)
00:14:57.000 31207.619 - 31332.450: 99.1458% ( 5)
00:14:57.000 31332.450 - 31457.280: 99.1875% ( 4)
00:14:57.000 31457.280 - 31582.110: 99.2292% ( 4)
00:14:57.000 31582.110 - 31706.941: 99.2708% ( 4)
00:14:57.000 31706.941 - 31831.771: 99.3229% ( 5)
00:14:57.000 31831.771 - 31956.602: 99.3333% ( 1)
00:14:57.000 37449.143 - 37698.804: 99.3438% ( 1)
00:14:57.000 37698.804 - 37948.465: 99.4271% ( 8)
00:14:57.000 37948.465 - 38198.126: 99.5208% ( 9)
00:14:57.000 38198.126 - 38447.787: 99.6042% ( 8)
00:14:57.000 38447.787 - 38697.448: 99.6875% ( 8)
00:14:57.001 38697.448 - 38947.109: 99.7812% ( 9)
00:14:57.001 38947.109 - 39196.770: 99.8646% ( 8)
00:14:57.001 39196.770 - 39446.430: 99.9583% ( 9)
00:14:57.001 39446.430 - 39696.091: 100.0000% ( 4)
00:14:57.001
00:14:57.001 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:57.001 ==============================================================================
00:14:57.001 Range in us Cumulative IO count
00:14:57.001 10797.836 - 10860.251: 0.0104% ( 1)
00:14:57.001 10860.251 - 10922.667: 0.1146% ( 10)
00:14:57.001 10922.667 - 10985.082: 0.3333% ( 21)
00:14:57.001 10985.082 - 11047.497: 0.6458% ( 30)
00:14:57.001 11047.497 - 11109.912: 1.0000% ( 34)
00:14:57.001 11109.912 - 11172.328: 1.5104% ( 49)
00:14:57.001 11172.328 - 11234.743: 2.1979% ( 66)
00:14:57.001 11234.743 - 11297.158: 2.9896% ( 76)
00:14:57.001 11297.158 - 11359.573: 4.1771% ( 114)
00:14:57.001 11359.573 - 11421.989: 5.2917% ( 107)
00:14:57.001 11421.989 - 11484.404: 6.5938% ( 125)
00:14:57.001 11484.404 - 11546.819: 7.8854% ( 124)
00:14:57.001 11546.819 - 11609.234: 9.2396% ( 130)
00:14:57.001 11609.234 - 11671.650: 10.8125% ( 151)
00:14:57.001 11671.650 - 11734.065: 12.4375% ( 156)
00:14:57.001 11734.065 - 11796.480: 14.0625% ( 156)
00:14:57.001 11796.480 - 11858.895: 15.5625% ( 144)
00:14:57.001 11858.895 - 11921.310: 17.1979% ( 157)
00:14:57.001 11921.310 - 11983.726: 18.8958% ( 163)
00:14:57.001 11983.726 - 12046.141: 20.6042% ( 164)
00:14:57.001 12046.141 - 12108.556: 22.6146% ( 193)
00:14:57.001 12108.556 - 12170.971: 24.6667% ( 197)
00:14:57.001 12170.971 - 12233.387: 26.9792% ( 222)
00:14:57.001 12233.387 - 12295.802: 29.2604% ( 219)
00:14:57.001 12295.802 - 12358.217: 31.4583% ( 211)
00:14:57.001 12358.217 - 12420.632: 33.8750% ( 232)
00:14:57.001 12420.632 - 12483.048: 36.4792% ( 250)
00:14:57.001 12483.048 - 12545.463: 38.8125% ( 224)
00:14:57.001 12545.463 - 12607.878: 41.1562% ( 225)
00:14:57.001 12607.878 - 12670.293: 43.7917% ( 253)
00:14:57.001 12670.293 - 12732.709: 46.4167% ( 252)
00:14:57.001 12732.709 - 12795.124: 48.9271% ( 241)
00:14:57.001 12795.124 - 12857.539: 51.1458% ( 213)
00:14:57.001 12857.539 - 12919.954: 53.3021% ( 207)
00:14:57.001 12919.954 - 12982.370: 55.2812% ( 190)
00:14:57.001 12982.370 - 13044.785: 57.4167% ( 205)
00:14:57.001 13044.785 - 13107.200: 59.4792% ( 198)
00:14:57.001 13107.200 - 13169.615: 61.8438% ( 227)
00:14:57.001 13169.615 - 13232.030: 63.8854% ( 196)
00:14:57.001 13232.030 - 13294.446: 65.8750% ( 191)
00:14:57.001 13294.446 - 13356.861: 67.8750% ( 192)
00:14:57.001 13356.861 - 13419.276: 69.4792% ( 154)
00:14:57.001 13419.276 - 13481.691: 71.0000% ( 146)
00:14:57.001 13481.691 - 13544.107: 72.4792% ( 142)
00:14:57.001 13544.107 - 13606.522: 73.7917% ( 126)
00:14:57.001 13606.522 - 13668.937: 75.1979% ( 135)
00:14:57.001 13668.937 - 13731.352: 76.6667% ( 141)
00:14:57.001 13731.352 - 13793.768: 77.8854% ( 117)
00:14:57.001 13793.768 - 13856.183: 78.8229% ( 90)
00:14:57.001 13856.183 - 13918.598: 79.6667% ( 81)
00:14:57.001 13918.598 - 13981.013: 80.4167% ( 72)
00:14:57.001 13981.013 - 14043.429: 81.1875% ( 74)
00:14:57.001 14043.429 - 14105.844: 81.8125% ( 60)
00:14:57.001 14105.844 - 14168.259: 82.3750% ( 54)
00:14:57.001 14168.259 - 14230.674: 82.9792% ( 58)
00:14:57.001 14230.674 - 14293.090: 83.4271% ( 43)
00:14:57.001 14293.090 - 14355.505: 83.8958% ( 45)
00:14:57.001 14355.505 - 14417.920: 84.4375% ( 52)
00:14:57.001 14417.920 - 14480.335: 84.9896% ( 53)
00:14:57.001 14480.335 - 14542.750: 85.4896% ( 48)
00:14:57.001 14542.750 - 14605.166: 86.0208% ( 51)
00:14:57.001 14605.166 - 14667.581: 86.6875% ( 64)
00:14:57.001 14667.581 - 14729.996: 87.2604% ( 55)
00:14:57.001 14729.996 - 14792.411: 87.7500% ( 47)
00:14:57.001 14792.411 - 14854.827: 88.2604% ( 49)
00:14:57.001 14854.827 - 14917.242: 88.8021% ( 52)
00:14:57.001 14917.242 - 14979.657: 89.2396% ( 42)
00:14:57.001 14979.657 - 15042.072: 89.6354% ( 38)
00:14:57.001 15042.072 - 15104.488: 90.0938% ( 44)
00:14:57.001 15104.488 - 15166.903: 90.5729% ( 46)
00:14:57.001 15166.903 - 15229.318: 90.9583% ( 37)
00:14:57.001 15229.318 - 15291.733: 91.3438% ( 37)
00:14:57.001 15291.733 - 15354.149: 91.6562% ( 30)
00:14:57.001 15354.149 - 15416.564: 91.9792% ( 31)
00:14:57.001 15416.564 - 15478.979: 92.2292% ( 24)
00:14:57.001 15478.979 - 15541.394: 92.5417% ( 30)
00:14:57.001 15541.394 - 15603.810: 92.8646% ( 31)
00:14:57.001 15603.810 - 15666.225: 93.1771% ( 30)
00:14:57.001 15666.225 - 15728.640: 93.5104% ( 32)
00:14:57.001 15728.640 - 15791.055: 93.8021% ( 28)
00:14:57.001 15791.055 - 15853.470: 94.0729% ( 26)
00:14:57.001 15853.470 - 15915.886: 94.2917% ( 21)
00:14:57.001 15915.886 - 15978.301: 94.4583% ( 16)
00:14:57.001 15978.301 - 16103.131: 94.7292% ( 26)
00:14:57.001 16103.131 - 16227.962: 94.9792% ( 24)
00:14:57.001 16227.962 - 16352.792: 95.1250% ( 14)
00:14:57.001 16352.792 - 16477.623: 95.2708% ( 14)
00:14:57.001 16477.623 - 16602.453: 95.3438% ( 7)
00:14:57.001 16602.453 - 16727.284: 95.4479% ( 10)
00:14:57.001 16727.284 - 16852.114: 95.8125% ( 35)
00:14:57.001 16852.114 - 16976.945: 96.1771% ( 35)
00:14:57.001 16976.945 - 17101.775: 96.5625% ( 37)
00:14:57.001 17101.775 - 17226.606: 96.8438% ( 27)
00:14:57.001 17226.606 - 17351.436: 97.1042% ( 25)
00:14:57.001 17351.436 - 17476.267: 97.3854% ( 27)
00:14:57.001 17476.267 - 17601.097: 97.6667% ( 27)
00:14:57.001 17601.097 - 17725.928: 97.8854% ( 21)
00:14:57.001 17725.928 - 17850.758: 98.0625% ( 17)
00:14:57.001 17850.758 - 17975.589: 98.2083% ( 14)
00:14:57.001 17975.589 - 18100.419: 98.3438% ( 13)
00:14:57.001 18100.419 - 18225.250: 98.4479% ( 10)
00:14:57.001 18225.250 - 18350.080: 98.4792% ( 3)
00:14:57.001 18350.080 - 18474.910: 98.5729% ( 9)
00:14:57.001 18474.910 - 18599.741: 98.6042% ( 3)
00:14:57.001 18599.741 - 18724.571: 98.6250% ( 2)
00:14:57.001 18724.571 - 18849.402: 98.6458% ( 2)
00:14:57.001 18849.402 - 18974.232: 98.6667% ( 2)
00:14:57.001 27837.196 - 27962.027: 98.6875% ( 2)
00:14:57.001 27962.027 - 28086.857: 98.7396% ( 5)
00:14:57.001 28086.857 - 28211.688: 98.7812% ( 4)
00:14:57.001 28211.688 - 28336.518: 98.8229% ( 4)
00:14:57.001 28336.518 - 28461.349: 98.8646% ( 4)
00:14:57.001 28461.349 - 28586.179: 98.9167% ( 5)
00:14:57.001 28586.179 - 28711.010: 98.9583% ( 4)
00:14:57.001 28711.010 - 28835.840: 99.0000% ( 4)
00:14:57.001 28835.840 - 28960.670: 99.0417% ( 4)
00:14:57.001 28960.670 - 29085.501: 99.0833% ( 4)
00:14:57.001 29085.501 - 29210.331: 99.1250% ( 4)
00:14:57.001 29210.331 - 29335.162: 99.1771% ( 5)
00:14:57.001 29335.162 - 29459.992: 99.2188% ( 4)
00:14:57.001 29459.992 - 29584.823: 99.2604% ( 4)
00:14:57.001 29584.823 - 29709.653: 99.3125% ( 5)
00:14:57.001 29709.653 - 29834.484: 99.3333% ( 2)
00:14:57.001 35951.177 - 36200.838: 99.3958% ( 6)
00:14:57.001 36200.838 - 36450.499: 99.4792% ( 8)
00:14:57.001 36450.499 - 36700.160: 99.5729% ( 9)
00:14:57.001 36700.160 - 36949.821: 99.6562% ( 8)
00:14:57.001 36949.821 - 37199.482: 99.7500% ( 9)
00:14:57.001 37199.482 - 37449.143: 99.8438% ( 9)
00:14:57.001 37449.143 - 37698.804: 99.9375% ( 9)
00:14:57.001 37698.804 - 37948.465: 100.0000% ( 6)
00:14:57.001
00:14:57.001 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:57.001 ==============================================================================
00:14:57.001 Range in us Cumulative IO count
00:14:57.001 10673.006 - 10735.421: 0.0103% ( 1)
00:14:57.001 10735.421 - 10797.836: 0.0310% ( 2)
00:14:57.001 10797.836 - 10860.251: 0.0724% ( 4)
00:14:57.001 10860.251 - 10922.667: 0.2070% ( 13)
00:14:57.001 10922.667 - 10985.082: 0.3518% ( 14)
00:14:57.001 10985.082 - 11047.497: 0.6519% ( 29)
00:14:57.001 11047.497 - 11109.912: 1.0244% ( 36)
00:14:57.001 11109.912 - 11172.328: 1.5418% ( 50)
00:14:57.001 11172.328 - 11234.743: 2.3179% ( 75)
00:14:57.001 11234.743 - 11297.158: 3.1560% ( 81)
00:14:57.001 11297.158 - 11359.573: 4.2012% ( 101)
00:14:57.001 11359.573 - 11421.989: 5.2980% ( 106)
00:14:57.001 11421.989 - 11484.404: 6.4052% ( 107)
00:14:57.001 11484.404 - 11546.819: 7.6366% ( 119)
00:14:57.001 11546.819 - 11609.234: 8.8576% ( 118)
00:14:57.001 11609.234 - 11671.650: 10.4098% ( 150)
00:14:57.001 11671.650 - 11734.065: 12.0137% ( 155)
00:14:57.001 11734.065 - 11796.480: 13.5969% ( 153)
00:14:57.001 11796.480 - 11858.895: 15.2939% ( 164)
00:14:57.001 11858.895 - 11921.310: 16.8667% ( 152)
00:14:57.001 11921.310 - 11983.726: 18.3257% ( 141)
00:14:57.001 11983.726 - 12046.141: 20.1469% ( 176)
00:14:57.001 12046.141 - 12108.556: 21.8957% ( 169)
00:14:57.001 12108.556 - 12170.971: 23.8307% ( 187)
00:14:57.001 12170.971 - 12233.387: 25.9830% ( 208)
00:14:57.001 12233.387 - 12295.802: 28.5286% ( 246)
00:14:57.001 12295.802 - 12358.217: 30.9189% ( 231)
00:14:57.001 12358.217 - 12420.632: 33.4851% ( 248)
00:14:57.001 12420.632 - 12483.048: 36.1858% ( 261)
00:14:57.001 12483.048 - 12545.463: 38.8659% ( 259)
00:14:57.001 12545.463 - 12607.878: 41.4218% ( 247)
00:14:57.001 12607.878 - 12670.293: 43.7707% ( 227)
00:14:57.001 12670.293 - 12732.709: 46.3059% ( 245)
00:14:57.001 12732.709 - 12795.124: 48.5410% ( 216)
00:14:57.001 12795.124 - 12857.539: 50.8589% ( 224)
00:14:57.001 12857.539 - 12919.954: 53.1664% ( 223)
00:14:57.001 12919.954 - 12982.370: 55.5257% ( 228)
00:14:57.001 12982.370 - 13044.785: 57.6366% ( 204)
00:14:57.001 13044.785 - 13107.200: 59.8096% ( 210)
00:14:57.001 13107.200 - 13169.615: 62.2310% ( 234)
00:14:57.001 13169.615 - 13232.030: 64.3108% ( 201)
00:14:57.001 13232.030 - 13294.446: 66.3493% ( 197)
00:14:57.001 13294.446 - 13356.861: 68.2947% ( 188)
00:14:57.002 13356.861 - 13419.276: 70.0745% ( 172)
00:14:57.002 13419.276 - 13481.691: 71.6474% ( 152)
00:14:57.002 13481.691 - 13544.107: 73.2409% ( 154)
00:14:57.002 13544.107 - 13606.522: 74.3998% ( 112)
00:14:57.002 13606.522 - 13668.937: 75.7657% ( 132)
00:14:57.002 13668.937 - 13731.352: 76.9247% ( 112)
00:14:57.002 13731.352 - 13793.768: 77.9594% ( 100)
00:14:57.002 13793.768 - 13856.183: 78.9632% ( 97)
00:14:57.002 13856.183 - 13918.598: 79.7082% ( 72)
00:14:57.002 13918.598 - 13981.013: 80.1842% ( 46)
00:14:57.002 13981.013 - 14043.429: 80.8568% ( 65)
00:14:57.002 14043.429 - 14105.844: 81.6122% ( 73)
00:14:57.002 14105.844 - 14168.259: 82.3365% ( 70)
00:14:57.002 14168.259 - 14230.674: 82.9470% ( 59)
00:14:57.002 14230.674 - 14293.090: 83.5161% ( 55)
00:14:57.002 14293.090 - 14355.505: 84.0025% ( 47)
00:14:57.002 14355.505 - 14417.920: 84.4992% ( 48)
00:14:57.002 14417.920 - 14480.335: 84.9752% ( 46)
00:14:57.002 14480.335 - 14542.750: 85.4098% ( 42)
00:14:57.002 14542.750 - 14605.166: 85.8237% ( 40)
00:14:57.002 14605.166 - 14667.581: 86.2893% ( 45)
00:14:57.002 14667.581 - 14729.996: 86.7757% ( 47)
00:14:57.002 14729.996 - 14792.411: 87.2517% ( 46)
00:14:57.002 14792.411 - 14854.827: 87.7794% ( 51)
00:14:57.002 14854.827 - 14917.242: 88.2761% ( 48)
00:14:57.002 14917.242 - 14979.657: 88.7624% ( 47)
00:14:57.002 14979.657 - 15042.072: 89.1660% ( 39)
00:14:57.002 15042.072 - 15104.488: 89.7144% ( 53)
00:14:57.002 15104.488 - 15166.903: 90.4077% ( 67)
00:14:57.002 15166.903 - 15229.318: 91.0493% ( 62)
00:14:57.002 15229.318 - 15291.733: 91.6080% ( 54)
00:14:57.002 15291.733 - 15354.149: 92.0323% ( 41)
00:14:57.002 15354.149 - 15416.564: 92.4151% ( 37)
00:14:57.002 15416.564 - 15478.979: 92.7877% ( 36)
00:14:57.002 15478.979 - 15541.394: 93.1188% ( 32)
00:14:57.002 15541.394 - 15603.810: 93.3568% ( 23)
00:14:57.002 15603.810 - 15666.225: 93.5844% ( 22)
00:14:57.002 15666.225 - 15728.640: 93.8017% ( 21)
00:14:57.002 15728.640 - 15791.055: 94.0811% ( 27)
00:14:57.002 15791.055 - 15853.470: 94.3502% ( 26)
00:14:57.002 15853.470 - 15915.886: 94.6089% ( 25)
00:14:57.002 15915.886 - 15978.301: 94.7848% ( 17)
00:14:57.002 15978.301 - 16103.131: 95.0228% ( 23)
00:14:57.002 16103.131 - 16227.962: 95.1366% ( 11)
00:14:57.002 16227.962 - 16352.792: 95.2090% ( 7)
00:14:57.002 16352.792 - 16477.623: 95.2815% ( 7)
00:14:57.002 16477.623 - 16602.453: 95.3435% ( 6)
00:14:57.002 16602.453 - 16727.284: 95.3953% ( 5)
00:14:57.002 16727.284 - 16852.114: 95.8195% ( 41)
00:14:57.002 16852.114 - 16976.945: 96.2438% ( 41)
00:14:57.002 16976.945 - 17101.775: 96.6370% ( 38)
00:14:57.002 17101.775 - 17226.606: 96.8750% ( 23)
00:14:57.002 17226.606 - 17351.436: 97.1130% ( 23)
00:14:57.002 17351.436 - 17476.267: 97.3406% ( 22)
00:14:57.002 17476.267 - 17601.097: 97.5683% ( 22)
00:14:57.002 17601.097 - 17725.928: 97.7752% ( 20)
00:14:57.002 17725.928 - 17850.758: 97.8787% ( 10)
00:14:57.002 17850.758 - 17975.589: 97.9719% ( 9)
00:14:57.002 17975.589 - 18100.419: 98.0339% ( 6)
00:14:57.002 18100.419 - 18225.250: 98.1167% ( 8)
00:14:57.002 18225.250 - 18350.080: 98.1685% ( 5)
00:14:57.002 18350.080 - 18474.910: 98.2099% ( 4)
00:14:57.002 18474.910 - 18599.741: 98.2616% ( 5)
00:14:57.002 18599.741 - 18724.571: 98.3133% ( 5)
00:14:57.002 18724.571 - 18849.402: 98.3547% ( 4)
00:14:57.002 18849.402 - 18974.232: 98.4272% ( 7)
00:14:57.002 18974.232 - 19099.063: 98.4789% ( 5)
00:14:57.002 19099.063 - 19223.893: 98.5203% ( 4)
00:14:57.002 19223.893 - 19348.724: 98.5824% ( 6)
00:14:57.002 19348.724 - 19473.554: 98.6031% ( 2)
00:14:57.002 19473.554 - 19598.385: 98.6238% ( 2)
00:14:57.002 19598.385 - 19723.215: 98.6548% ( 3)
00:14:57.002 19723.215 - 19848.046: 98.6755% ( 2)
00:14:57.002 19972.876 - 20097.707: 98.6962% ( 2)
00:14:57.002 20097.707 - 20222.537: 98.7479% ( 5)
00:14:57.002 20222.537 - 20347.368: 98.7893% ( 4)
00:14:57.002 20347.368 - 20472.198: 98.8411% ( 5)
00:14:57.002 20472.198 - 20597.029: 98.8825% ( 4)
00:14:57.002 20597.029 - 20721.859: 98.9238% ( 4)
00:14:57.002 20721.859 - 20846.690: 98.9756% ( 5)
00:14:57.002 20846.690 - 20971.520: 99.0066% ( 3)
00:14:57.002 20971.520 - 21096.350: 99.0480% ( 4)
00:14:57.002 21096.350 - 21221.181: 99.0998% ( 5)
00:14:57.002 21221.181 - 21346.011: 99.1411% ( 4)
00:14:57.002 21346.011 - 21470.842: 99.1929% ( 5)
00:14:57.002 21470.842 - 21595.672: 99.2343% ( 4)
00:14:57.002 21595.672 - 21720.503: 99.2860% ( 5)
00:14:57.002 21720.503 - 21845.333: 99.3274% ( 4)
00:14:57.002 21845.333 - 21970.164: 99.3377% ( 1)
00:14:57.002 27462.705 - 27587.535: 99.3688% ( 3)
00:14:57.002 27587.535 - 27712.366: 99.4102% ( 4)
00:14:57.002 27712.366 - 27837.196: 99.4516% ( 4)
00:14:57.002 27837.196 - 27962.027: 99.4930% ( 4)
00:14:57.002 27962.027 - 28086.857: 99.5344% ( 4)
00:14:57.002 28086.857 - 28211.688: 99.5861% ( 5)
00:14:57.002 28211.688 - 28336.518: 99.6275% ( 4)
00:14:57.002 28336.518 - 28461.349: 99.6689% ( 4)
00:14:57.002 28461.349 - 28586.179: 99.7103% ( 4)
00:14:57.002 28586.179 - 28711.010: 99.7517% ( 4)
00:14:57.002 28711.010 - 28835.840: 99.7930% ( 4)
00:14:57.002 28835.840 - 28960.670: 99.8344% ( 4)
00:14:57.002 28960.670 - 29085.501: 99.8758% ( 4)
00:14:57.002 29085.501 - 29210.331: 99.9069% ( 3)
00:14:57.002 29210.331 - 29335.162: 99.9483% ( 4)
00:14:57.002 29335.162 - 29459.992: 100.0000% ( 5)
00:14:57.002
00:14:57.002 13:48:43 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:14:57.002
00:14:57.002 real 0m2.737s
00:14:57.002 user 0m2.324s
00:14:57.002 sys 0m0.308s
00:14:57.002 13:48:43 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:57.002 13:48:43 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:14:57.002 ************************************
00:14:57.002 END TEST nvme_perf
00:14:57.002 ************************************
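[Note] Each histogram row above pairs a latency bucket ("Range in us") with the cumulative percentage of IOs completed at or below that bucket's upper bound, plus the bucket's own count; buckets with zero IOs are omitted, which is why the ranges are not contiguous. A self-contained C sketch of that bookkeeping, using made-up bucket data in the same shape as the rows above (not SPDK's internal histogram code):

    #include <stdio.h>

    /* One histogram bucket: upper bound of the latency range in
     * microseconds and how many IOs landed in it. */
    struct bucket { double hi_us; unsigned count; };

    int main(void)
    {
        /* Hypothetical data standing in for the real measurements. */
        struct bucket h[] = { {10860.251, 1}, {10922.667, 10}, {10985.082, 21} };
        size_t i, n = sizeof(h) / sizeof(h[0]);
        unsigned total = 0, running = 0;

        for (i = 0; i < n; i++)
            total += h[i].count;
        for (i = 0; i < n; i++) {
            running += h[i].count;
            /* Mirrors the "range: cumulative% ( count)" rows. */
            printf("%12.3f: %8.4f%% (%5u)\n", h[i].hi_us,
                   100.0 * running / total, h[i].count);
        }
        return 0;
    }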
00:14:57.002 13:48:43 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:57.002 13:48:43 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:14:57.002 13:48:43 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:57.002 13:48:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:57.002 ************************************
00:14:57.002 START TEST nvme_hello_world
00:14:57.002 ************************************
00:14:57.002 13:48:43 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:57.261 Initializing NVMe Controllers
00:14:57.261 Attached to 0000:00:10.0
00:14:57.261 Namespace ID: 1 size: 6GB
00:14:57.261 Attached to 0000:00:11.0
00:14:57.261 Namespace ID: 1 size: 5GB
00:14:57.261 Attached to 0000:00:13.0
00:14:57.261 Namespace ID: 1 size: 1GB
00:14:57.261 Attached to 0000:00:12.0
00:14:57.261 Namespace ID: 1 size: 4GB
00:14:57.261 Namespace ID: 2 size: 4GB
00:14:57.261 Namespace ID: 3 size: 4GB
00:14:57.261 Initialization complete.
00:14:57.261 INFO: using host memory buffer for IO
00:14:57.261 Hello world!
00:14:57.261 INFO: using host memory buffer for IO
00:14:57.261 Hello world!
00:14:57.261 INFO: using host memory buffer for IO
00:14:57.261 Hello world!
00:14:57.261 INFO: using host memory buffer for IO
00:14:57.261 Hello world!
00:14:57.261 INFO: using host memory buffer for IO
00:14:57.261 Hello world!
00:14:57.261 INFO: using host memory buffer for IO
00:14:57.261 Hello world!
00:14:57.261 ************************************
00:14:57.261 END TEST nvme_hello_world
00:14:57.261 ************************************
00:14:57.261
00:14:57.261 real 0m0.360s
00:14:57.261 user 0m0.139s
00:14:57.261 sys 0m0.173s
00:14:57.261 13:48:44 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:57.261 13:48:44 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
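[Note] hello_world is the stock SPDK example at build/examples/hello_world: probe the PCIe NVMe controllers, attach to each one, walk the active namespaces (the "Namespace ID: n size: xGB" lines), then write and read back "Hello world!" through an IO qpair. A compressed sketch of the probe/attach skeleton; the calls are SPDK's public nvme.h API as best I recall, not code taken from this tree:

    #include <stdio.h>
    #include <stdint.h>
    #include "spdk/nvme.h"
    #include "spdk/env.h"

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller the probe finds */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("Namespace ID: %u size: %juGB\n", nsid,
                   (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
        }
        /* The write/read of "Hello world!" through an allocated IO
         * qpair would follow here, per namespace. */
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0)
            return 1;
        /* NULL transport id means: enumerate local PCIe controllers. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }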
00:14:57.520 13:48:44 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:57.520 13:48:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:57.520 13:48:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:57.520 13:48:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:57.520 ************************************
00:14:57.520 START TEST nvme_sgl
00:14:57.520 ************************************
00:14:57.520 13:48:44 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:57.779 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:14:57.779 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:14:57.779 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:14:57.779 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:14:57.779 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:14:57.779 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:14:57.779 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:14:57.779 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:14:57.779 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:14:57.779 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:14:57.779 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:14:57.779 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:14:57.779 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:14:57.779 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:14:57.779 NVMe Readv/Writev Request test
00:14:57.779 Attached to 0000:00:10.0
00:14:57.779 Attached to 0000:00:11.0
00:14:57.779 Attached to 0000:00:13.0
00:14:57.779 Attached to 0000:00:12.0
00:14:57.779 0000:00:10.0: build_io_request_2 test passed
00:14:57.779 0000:00:10.0: build_io_request_4 test passed
00:14:57.779 0000:00:10.0: build_io_request_5 test passed
00:14:57.779 0000:00:10.0: build_io_request_6 test passed
00:14:57.779 0000:00:10.0: build_io_request_7 test passed
00:14:57.779 0000:00:10.0: build_io_request_10 test passed
00:14:57.779 0000:00:11.0: build_io_request_2 test passed
00:14:57.779 0000:00:11.0: build_io_request_4 test passed
00:14:57.779 0000:00:11.0: build_io_request_5 test passed
00:14:57.779 0000:00:11.0: build_io_request_6 test passed
00:14:57.779 0000:00:11.0: build_io_request_7 test passed
00:14:57.779 0000:00:11.0: build_io_request_10 test passed
00:14:57.779 Cleaning up...
00:14:57.779 ************************************
00:14:57.779 END TEST nvme_sgl
00:14:57.779 ************************************
00:14:57.779
00:14:57.779 real 0m0.477s
00:14:57.779 user 0m0.234s
00:14:57.779 sys 0m0.197s
00:14:57.779 13:48:44 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:57.779 13:48:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
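[Note] The sgl test submits reads and writes from scattered buffers; each "build_io_request_N" is one case, and "Invalid IO length parameter" is the expected rejection of the deliberately malformed ones. Scattered submissions go through SPDK's vectored command API, which pulls the scatter list from two user callbacks. A sketch under my recollection of the spdk_nvme_ns_cmd_writev contract (ns, qpair and io_done come from the omitted setup):

    #include "spdk/nvme.h"

    /* A two-element scatter list for the callbacks to walk. */
    struct sgl_ctx {
        struct { void *base; uint32_t len; } sge[2];
        int idx;
    };

    static void reset_sgl(void *arg, uint32_t sgl_offset)
    {
        ((struct sgl_ctx *)arg)->idx = 0; /* restart; offset ignored here */
    }

    static int next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = arg;

        *address = c->sge[c->idx].base;
        *length = c->sge[c->idx].len;
        c->idx++;
        return 0;
    }

    static int submit_scattered_write(struct spdk_nvme_ns *ns,
                                      struct spdk_nvme_qpair *qpair,
                                      struct sgl_ctx *c, uint32_t lba_count,
                                      spdk_nvme_cmd_cb io_done)
    {
        /* If the SGE lengths do not add up to lba_count blocks, the
         * request is rejected up front - the "Invalid IO length
         * parameter" failures in the log above. */
        return spdk_nvme_ns_cmd_writev(ns, qpair, 0, lba_count, io_done, c,
                                       0 /* io_flags */, reset_sgl, next_sge);
    }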
00:14:58.038 13:48:44 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:58.038 13:48:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:58.038 13:48:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:58.038 13:48:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:58.038 ************************************
00:14:58.038 START TEST nvme_e2edp
00:14:58.038 ************************************
00:14:58.038 13:48:44 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:58.296 NVMe Write/Read with End-to-End data protection test
00:14:58.296 Attached to 0000:00:10.0
00:14:58.296 Attached to 0000:00:11.0
00:14:58.296 Attached to 0000:00:13.0
00:14:58.296 Attached to 0000:00:12.0
00:14:58.296 Cleaning up...
00:14:58.296 ************************************
00:14:58.296 END TEST nvme_e2edp
00:14:58.296 ************************************
00:14:58.296
00:14:58.296 real 0m0.382s
00:14:58.296 user 0m0.142s
00:14:58.296 sys 0m0.194s
00:14:58.296 13:48:45 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:58.296 13:48:45 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
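[Note] nvme_dp exercises end-to-end data protection: writes and reads against namespaces formatted with protection information. In SPDK the per-IO protection behavior is selected through io_flags bits; the flag names below are from nvme_spec.h as I remember them and are worth re-checking against the tree:

    #include "spdk/nvme.h"

    /* PRACT: controller generates/strips the protection information.
     * PRCHK_*: controller verifies guard CRC and reference tag in flight. */
    static uint32_t e2edp_io_flags(void)
    {
        return SPDK_NVME_IO_FLAGS_PRACT |
               SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
               SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
        /* pass as the io_flags argument of spdk_nvme_ns_cmd_write()/read() */
    }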
00:14:58.296 13:48:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:58.296 13:48:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:58.296 13:48:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:58.296 13:48:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:58.296 ************************************
00:14:58.296 START TEST nvme_reserve
00:14:58.296 ************************************
00:14:58.296 13:48:45 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:58.862 =====================================================
00:14:58.862 NVMe Controller at PCI bus 0, device 16, function 0
00:14:58.862 =====================================================
00:14:58.862 Reservations: Not Supported
00:14:58.862 =====================================================
00:14:58.862 NVMe Controller at PCI bus 0, device 17, function 0
00:14:58.862 =====================================================
00:14:58.862 Reservations: Not Supported
00:14:58.862 =====================================================
00:14:58.862 NVMe Controller at PCI bus 0, device 19, function 0
00:14:58.862 =====================================================
00:14:58.862 Reservations: Not Supported
00:14:58.862 =====================================================
00:14:58.862 NVMe Controller at PCI bus 0, device 18, function 0
00:14:58.862 =====================================================
00:14:58.862 Reservations: Not Supported
00:14:58.862 Reservation test passed
00:14:58.862 ************************************
00:14:58.862 END TEST nvme_reserve
00:14:58.862 ************************************
00:14:58.862
00:14:58.862 real 0m0.397s
00:14:58.862 user 0m0.151s
00:14:58.862 sys 0m0.202s
00:14:58.862 13:48:45 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:58.862 13:48:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:14:58.862 13:48:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:58.862 13:48:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:14:58.862 13:48:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:58.862 13:48:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:58.862 ************************************
00:14:58.862 START TEST nvme_err_injection
00:14:58.862 ************************************
00:14:58.862 13:48:45 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:59.122 NVMe Error Injection test
00:14:59.122 Attached to 0000:00:10.0
00:14:59.122 Attached to 0000:00:11.0
00:14:59.122 Attached to 0000:00:13.0
00:14:59.122 Attached to 0000:00:12.0
00:14:59.122 0000:00:10.0: get features failed as expected
00:14:59.122 0000:00:11.0: get features failed as expected
00:14:59.122 0000:00:13.0: get features failed as expected
00:14:59.122 0000:00:12.0: get features failed as expected
00:14:59.122 0000:00:13.0: get features successfully as expected
00:14:59.122 0000:00:12.0: get features successfully as expected
00:14:59.122 0000:00:10.0: get features successfully as expected
00:14:59.122 0000:00:11.0: get features successfully as expected
00:14:59.122 0000:00:11.0: read failed as expected
00:14:59.122 0000:00:13.0: read failed as expected
00:14:59.122 0000:00:12.0: read failed as expected
00:14:59.122 0000:00:10.0: read failed as expected
00:14:59.122 0000:00:10.0: read successfully as expected
00:14:59.122 0000:00:11.0: read successfully as expected
00:14:59.122 0000:00:13.0: read successfully as expected
00:14:59.122 0000:00:12.0: read successfully as expected
00:14:59.122 Cleaning up...
00:14:59.122 ************************************
00:14:59.122 END TEST nvme_err_injection
00:14:59.122 ************************************
00:14:59.122
00:14:59.122 real 0m0.402s
00:14:59.122 user 0m0.176s
00:14:59.122 sys 0m0.180s
00:14:59.122 13:48:46 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:59.122 13:48:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
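[Note] err_injection gets its "failed as expected" / "successfully as expected" pairs by arming fake completion statuses on a qpair, issuing the command, and then disarming. A sketch around the error-injection call I recall from SPDK's nvme.h; treat the exact argument order and types as unverified assumptions:

    #include "spdk/nvme.h"

    /* Arm: the next READ on this qpair completes with a generic
     * "invalid field" status instead of a real device result. */
    static int arm_read_error(struct spdk_nvme_ctrlr *ctrlr,
                              struct spdk_nvme_qpair *qpair)
    {
        return spdk_nvme_qpair_add_cmd_error_injection(
            ctrlr, qpair, SPDK_NVME_OPC_READ,
            false /* still submit, only fake the status */,
            0 /* timeout */, 1 /* inject once */,
            SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
    }

    /* Disarm after the expected failure has been observed, so the
     * follow-up read succeeds "as expected". */
    static void disarm_read_error(struct spdk_nvme_ctrlr *ctrlr,
                                  struct spdk_nvme_qpair *qpair)
    {
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, qpair,
                                                   SPDK_NVME_OPC_READ);
    }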
00:14:59.380 13:48:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:14:59.380 13:48:46 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']'
00:14:59.380 13:48:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:59.380 13:48:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:59.380 ************************************
00:14:59.380 START TEST nvme_overhead
00:14:59.380 ************************************
00:14:59.380 13:48:46 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:15:00.756 Initializing NVMe Controllers
00:15:00.756 Attached to 0000:00:10.0
00:15:00.756 Attached to 0000:00:11.0
00:15:00.756 Attached to 0000:00:13.0
00:15:00.756 Attached to 0000:00:12.0
00:15:00.756 Initialization complete. Launching workers.
00:15:00.757 submit (in ns) avg, min, max = 15213.6, 12114.3, 87301.9
00:15:00.757 complete (in ns) avg, min, max = 10469.7, 7885.7, 693669.5
00:15:00.757
00:15:00.757 Submit histogram
00:15:00.757 ================
00:15:00.757 Range in us Cumulative Count
00:15:00.757 12.069 - 12.130: 0.0441% ( 4)
00:15:00.757 12.130 - 12.190: 0.2646% ( 20)
00:15:00.757 12.190 - 12.251: 1.0362% ( 70)
00:15:00.757 12.251 - 12.312: 2.4802% ( 131)
00:15:00.757 12.312 - 12.373: 4.2328% ( 159)
00:15:00.757 12.373 - 12.434: 6.2169% ( 180)
00:15:00.757 12.434 - 12.495: 8.0026% ( 162)
00:15:00.757 12.495 - 12.556: 9.7773% ( 161)
00:15:00.757 12.556 - 12.617: 11.5631% ( 162)
00:15:00.757 12.617 - 12.678: 12.9519% ( 126)
00:15:00.757 12.678 - 12.739: 14.5944% ( 149)
00:15:00.757 12.739 - 12.800: 15.8179% ( 111)
00:15:00.757 12.800 - 12.861: 16.6446% ( 75)
00:15:00.757 12.861 - 12.922: 17.2950% ( 59)
00:15:00.757 12.922 - 12.983: 17.8131% ( 47)
00:15:00.757 12.983 - 13.044: 18.3752% ( 51)
00:15:00.757 13.044 - 13.105: 19.0697% ( 63)
00:15:00.757 13.105 - 13.166: 20.7892% ( 156)
00:15:00.757 13.166 - 13.227: 23.1592% ( 215)
00:15:00.757 13.227 - 13.288: 25.7165% ( 232)
00:15:00.757 13.288 - 13.349: 28.2187% ( 227)
00:15:00.757 13.349 - 13.410: 31.2500% ( 275)
00:15:00.757 13.410 - 13.470: 34.1160% ( 260)
00:15:00.757 13.470 - 13.531: 37.1362% ( 274)
00:15:00.757 13.531 - 13.592: 40.2447% ( 282)
00:15:00.757 13.592 - 13.653: 42.9453% ( 245)
00:15:00.757 13.653 - 13.714: 44.8192% ( 170)
00:15:00.757 13.714 - 13.775: 46.3073% ( 135)
00:15:00.757 13.775 - 13.836: 47.8616% ( 141)
00:15:00.757 13.836 - 13.897: 49.9559% ( 190)
00:15:00.757 13.897 - 13.958: 52.6124% ( 241)
00:15:00.757 13.958 - 14.019: 55.2028% ( 235)
00:15:00.757 14.019 - 14.080: 57.2751% ( 188)
00:15:00.757 14.080 - 14.141: 59.2041% ( 175)
00:15:00.757 14.141 - 14.202: 60.8907% ( 153)
00:15:00.757 14.202 - 14.263: 63.2496% ( 214)
00:15:00.757 14.263 - 14.324: 64.8258% ( 143)
00:15:00.757 14.324 - 14.385: 66.7218% ( 172)
00:15:00.757 14.385 - 14.446: 68.0556% ( 121)
00:15:00.757 14.446 - 14.507: 69.0807% ( 93)
00:15:00.757 14.507 - 14.568: 69.9184% ( 76)
00:15:00.757 14.568 - 14.629: 70.6459% ( 66)
00:15:00.757 14.629 - 14.690: 71.4506% ( 73)
00:15:00.757 14.690 - 14.750: 72.0018% ( 50)
00:15:00.757 14.750 - 14.811: 72.3214% ( 29)
00:15:00.757 14.811 - 14.872: 72.6852% ( 33)
00:15:00.757 14.872 - 14.933: 73.0489% ( 33)
00:15:00.757 14.933 - 14.994: 73.2914% ( 22)
00:15:00.757 14.994 - 15.055: 73.6111% ( 29)
00:15:00.757 15.055 - 15.116: 73.7654% ( 14)
00:15:00.757 15.116 - 15.177: 73.9528% ( 17)
00:15:00.757 15.177 - 15.238: 74.0520% ( 9)
00:15:00.757 15.238 - 15.299: 74.1623% ( 10)
00:15:00.757 15.299 - 15.360: 74.2284% ( 6)
00:15:00.757 15.360 - 15.421: 74.2945% ( 6)
00:15:00.757 15.421 - 15.482: 74.3496% ( 5)
00:15:00.757 15.482 - 15.543: 74.4048% ( 5)
00:15:00.757 15.543 - 15.604: 74.4378% ( 3)
00:15:00.757 15.604 - 15.726: 74.4929% ( 5)
00:15:00.757 15.726 - 15.848: 74.5260% ( 3)
00:15:00.757 15.848 - 15.970: 74.5811% ( 5)
00:15:00.757 15.970 - 16.091: 74.6362% ( 5)
00:15:00.757 16.091 - 16.213: 74.6803% ( 4)
00:15:00.757 16.213 - 16.335: 74.7354% ( 5)
00:15:00.757 16.335 - 16.457: 74.7795% ( 4)
00:15:00.757 16.457 - 16.579: 74.8347% ( 5)
00:15:00.757 16.579 - 16.701: 74.8567% ( 2)
00:15:00.757 16.701 - 16.823: 74.8898% ( 3)
00:15:00.757 16.823 - 16.945: 74.9118% ( 2)
00:15:00.757 16.945 - 17.067: 74.9228% ( 1)
00:15:00.757 17.067 - 17.189: 74.9559% ( 3)
00:15:00.757 17.189 - 17.310: 75.0331% ( 7)
00:15:00.757 17.310 - 17.432: 75.0772% ( 4)
00:15:00.757 17.432 - 17.554: 75.1653% ( 8)
00:15:00.757 17.554 - 17.676: 75.2315% ( 6)
00:15:00.757 17.676 - 17.798: 75.3197% ( 8)
00:15:00.757 17.798 - 17.920: 75.4299% ( 10)
00:15:00.757 17.920 - 18.042: 75.5401% ( 10)
00:15:00.757 18.042 - 18.164: 75.6283% ( 8)
00:15:00.757 18.164 - 18.286: 75.7496% ( 11)
00:15:00.757 18.286 - 18.408: 75.8598% ( 10)
00:15:00.757 18.408 - 18.530: 76.1023% ( 22)
00:15:00.757 18.530 - 18.651: 77.5683% ( 133)
00:15:00.757 18.651 - 18.773: 81.6138% ( 367)
00:15:00.757 18.773 - 18.895: 85.7914% ( 379)
00:15:00.757 18.895 - 19.017: 88.7787% ( 271)
00:15:00.757 19.017 - 19.139: 90.6636% ( 171)
00:15:00.757 19.139 - 19.261: 91.9643% ( 118)
00:15:00.757 19.261 - 19.383: 93.0225% ( 96)
00:15:00.757 19.383 - 19.505: 93.7831% ( 69)
00:15:00.757 19.505 - 19.627: 94.2240% ( 40)
00:15:00.757 19.627 - 19.749: 94.6318% ( 37)
00:15:00.757 19.749 - 19.870: 94.9405% ( 28)
00:15:00.757 19.870 - 19.992: 95.1168% ( 16)
00:15:00.757 19.992 - 20.114: 95.2381% ( 11)
00:15:00.757 20.114 - 20.236: 95.3483% ( 10)
00:15:00.757 20.236 - 20.358: 95.4586% ( 10)
00:15:00.757 20.358 - 20.480: 95.5247% ( 6)
00:15:00.757 20.480 - 20.602: 95.6349% ( 10)
00:15:00.757 20.602 - 20.724: 95.7892% ( 14)
00:15:00.757 20.724 - 20.846: 95.9105% ( 11)
00:15:00.757 20.846 - 20.968: 96.0097% ( 9)
00:15:00.757 20.968 - 21.090: 96.0538% ( 4)
00:15:00.757 21.090 - 21.211: 96.1089% ( 5)
00:15:00.757 21.211 - 21.333: 96.1971% ( 8)
00:15:00.757 21.333 - 21.455: 96.2632% ( 6)
00:15:00.757 21.455 - 21.577: 96.2963% ( 3)
00:15:00.757 21.577 - 21.699: 96.3955% ( 9)
00:15:00.757 21.699 - 21.821: 96.4506% ( 5)
00:15:00.757 21.821 - 21.943: 96.5168% ( 6)
00:15:00.757 21.943 - 22.065: 96.5278% ( 1)
00:15:00.757 22.065 - 22.187: 96.5939% ( 6)
00:15:00.757 22.187 - 22.309: 96.6049% ( 1)
00:15:00.757 22.309 - 22.430: 96.6160% ( 1)
00:15:00.757 22.430 - 22.552: 96.6380% ( 2)
00:15:00.757 22.552 - 22.674: 96.6490% ( 1)
00:15:00.757 22.674 - 22.796: 96.6711% ( 2)
00:15:00.757 22.796 - 22.918: 96.6931% ( 2)
00:15:00.757 22.918 - 23.040: 96.7262% ( 3)
00:15:00.757 23.040 - 23.162: 96.7482% ( 2)
00:15:00.757 23.162 - 23.284: 96.7923% ( 4)
00:15:00.757 23.284 - 23.406: 96.8144% ( 2)
00:15:00.757 23.406 - 23.528: 96.8474% ( 3)
00:15:00.757 23.528 - 23.650: 96.8585% ( 1)
00:15:00.757 23.650 - 23.771: 96.9246% ( 6)
00:15:00.757 23.771 - 23.893: 96.9356% ( 1)
00:15:00.757 23.893 - 24.015: 96.9907% ( 5)
00:15:00.757 24.015 - 24.137: 97.0238% ( 3)
00:15:00.757 24.137 - 24.259: 97.0348% ( 1)
00:15:00.757 24.259 - 24.381: 97.0679% ( 3)
00:15:00.757 24.381 - 24.503: 97.1230% ( 5)
00:15:00.757 24.503 - 24.625: 97.1892% ( 6)
00:15:00.757 24.625 - 24.747: 97.2773% ( 8)
00:15:00.757 24.747 - 24.869: 97.3325% ( 5)
00:15:00.757 24.869 - 24.990: 97.3986% ( 6)
00:15:00.757 24.990 - 25.112: 97.4757% ( 7)
00:15:00.757 25.112 - 25.234: 97.6301% ( 14)
00:15:00.757 25.234 - 25.356: 97.6962% ( 6)
00:15:00.757 25.356 - 25.478: 97.8505% ( 14)
00:15:00.757 25.478 - 25.600: 97.9497% ( 9)
00:15:00.757 25.600 - 25.722: 98.1041% ( 14)
00:15:00.757 25.722 - 25.844: 98.2033% ( 9)
00:15:00.757 25.844 - 25.966: 98.2804% ( 7)
00:15:00.757 25.966 - 26.088: 98.3135% ( 3)
00:15:00.757 26.088 - 26.210: 98.3907% ( 7)
00:15:00.757 26.210 - 26.331: 98.4347% ( 4)
00:15:00.757 26.331 - 26.453: 98.4458% ( 1)
00:15:00.757 26.453 - 26.575: 98.4678% ( 2)
00:15:00.757 26.575 - 26.697: 98.5009% ( 3)
00:15:00.757 26.697 - 26.819: 98.5229% ( 2)
00:15:00.757 26.941 - 27.063: 98.5560% ( 3)
00:15:00.757 27.063 - 27.185: 98.6221% ( 6)
00:15:00.757 27.185 - 27.307: 98.6662% ( 4)
00:15:00.757 27.429 - 27.550: 98.6883% ( 2)
00:15:00.757 27.550 - 27.672: 98.7103% ( 2)
00:15:00.757 27.672 - 27.794: 98.7324% ( 2)
00:15:00.757 27.794 - 27.916: 98.7434% ( 1)
00:15:00.757 27.916 - 28.038: 98.7544% ( 1)
00:15:00.757 28.038 - 28.160: 98.7654% ( 1)
00:15:00.757 28.160 - 28.282: 98.7985% ( 3)
00:15:00.757 28.282 - 28.404: 98.8426% ( 4)
00:15:00.757 28.404 - 28.526: 98.8867% ( 4)
00:15:00.757 28.526 - 28.648: 98.9087% ( 2)
00:15:00.757 28.648 - 28.770: 98.9198% ( 1)
00:15:00.757 28.770 - 28.891: 98.9528% ( 3)
00:15:00.757 28.891 - 29.013: 98.9749% ( 2)
00:15:00.757 29.013 - 29.135: 99.0300% ( 5)
00:15:00.757 29.135 - 29.257: 99.0961% ( 6)
00:15:00.757 29.257 - 29.379: 99.1402% ( 4)
00:15:00.757 29.379 - 29.501: 99.1843% ( 4)
00:15:00.757 29.501 - 29.623: 99.2394% ( 5)
00:15:00.757 29.623 - 29.745: 99.3056% ( 6)
00:15:00.757 29.745 - 29.867: 99.3496% ( 4)
00:15:00.757 29.867 - 29.989: 99.4268% ( 7)
00:15:00.757 29.989 - 30.110: 99.4819% ( 5)
00:15:00.757 30.110 - 30.232: 99.5150% ( 3)
00:15:00.757 30.232 - 30.354: 99.5481% ( 3)
00:15:00.757 30.476 - 30.598: 99.5591% ( 1)
00:15:00.757 30.598 - 30.720: 99.5922% ( 3)
00:15:00.757 30.720 - 30.842: 99.6032% ( 1)
00:15:00.757 30.842 - 30.964: 99.6142% ( 1)
00:15:00.757 30.964 - 31.086: 99.6252% ( 1)
00:15:00.757 31.086 - 31.208: 99.6583% ( 3)
00:15:00.757 31.208 - 31.451: 99.6803% ( 2)
00:15:00.758 31.451 - 31.695: 99.6914% ( 1)
00:15:00.758 31.695 - 31.939: 99.7134% ( 2)
00:15:00.758 31.939 - 32.183: 99.7244% ( 1)
00:15:00.758 32.670 - 32.914: 99.7354% ( 1)
00:15:00.758 32.914 - 33.158: 99.7465% ( 1)
00:15:00.758 33.402 - 33.646: 99.7575% ( 1)
00:15:00.758 33.890 - 34.133: 99.7795% ( 2)
00:15:00.758 34.865 - 35.109: 99.7906% ( 1)
00:15:00.758 35.596 - 35.840: 99.8016% ( 1)
00:15:00.758 36.084 - 36.328: 99.8347% ( 3)
00:15:00.758 36.328 - 36.571: 99.8457% ( 1)
00:15:00.758 36.571 - 36.815: 99.8567% ( 1)
00:15:00.758 37.059 - 37.303: 99.8677% ( 1)
00:15:00.758 38.034 - 38.278: 99.8787% ( 1)
00:15:00.758 38.522 - 38.766: 99.9008% ( 2)
00:15:00.758 39.253 - 39.497: 99.9118% ( 1)
00:15:00.758 40.716 - 40.960: 99.9228% ( 1)
00:15:00.758 43.154 - 43.398: 99.9339% ( 1)
00:15:00.758 44.373 - 44.617: 99.9449% ( 1)
00:15:00.758 44.617 - 44.861: 99.9559% ( 1)
00:15:00.758 45.105 - 45.349: 99.9669% ( 1)
00:15:00.758 47.543 - 47.787: 99.9780% ( 1)
00:15:00.758 47.787 - 48.030: 99.9890% ( 1)
00:15:00.758 87.284 - 87.771: 100.0000% ( 1)
00:15:00.758
00:15:00.758 Complete histogram
00:15:00.758 ==================
00:15:00.758 Range in us Cumulative Count
00:15:00.758 7.863 - 7.924: 0.1213% ( 11)
00:15:00.758 7.924 - 7.985: 1.2235% ( 100)
00:15:00.758 7.985 - 8.046: 2.6124% ( 126)
00:15:00.758 8.046 - 8.107: 3.4943% ( 80)
00:15:00.758 8.107 - 8.168: 4.1446% ( 59)
00:15:00.758 8.168 - 8.229: 4.8832% ( 67)
00:15:00.758 8.229 - 8.290: 5.2469% ( 33)
00:15:00.758 8.290 - 8.350: 5.6989% ( 41)
00:15:00.758 8.350 - 8.411: 7.5507% ( 168)
00:15:00.758 8.411 - 8.472: 9.3805% ( 166)
00:15:00.758 8.472 - 8.533: 11.3646% ( 180)
00:15:00.758 8.533 - 8.594: 13.3929% ( 184)
00:15:00.758 8.594 - 8.655: 15.8840% ( 226)
00:15:00.758 8.655 - 8.716: 20.5467% ( 423)
00:15:00.758 8.716 - 8.777: 24.8126% ( 387)
00:15:00.758 8.777 - 8.838: 26.9400% ( 193)
00:15:00.758 8.838 - 8.899: 28.4061% ( 133)
00:15:00.758 8.899 - 8.960: 29.5855% ( 107)
00:15:00.758 8.960 - 9.021: 30.5556% ( 88)
00:15:00.758 9.021 - 9.082: 31.8783% ( 120)
00:15:00.758 9.082 - 9.143: 34.6781% ( 254)
00:15:00.758 9.143 - 9.204: 37.7425% ( 278)
00:15:00.758 9.204 - 9.265: 42.7579% ( 455)
00:15:00.758 9.265 - 9.326: 46.8695% ( 373)
00:15:00.758 9.326 - 9.387: 50.0110% ( 285)
00:15:00.758 9.387 - 9.448: 52.3920% ( 216)
00:15:00.758 9.448 - 9.509: 54.3430% ( 177)
00:15:00.758 9.509 - 9.570: 55.7760% ( 130)
00:15:00.758 9.570 - 9.630: 57.1869% ( 128)
00:15:00.758 9.630 - 9.691: 59.5348% ( 213)
00:15:00.758 9.691 - 9.752: 62.0481% ( 228)
00:15:00.758 9.752 - 9.813: 64.4951% ( 222)
00:15:00.758 9.813 - 9.874: 66.8871% ( 217)
00:15:00.758 9.874 - 9.935: 68.5516% ( 151)
00:15:00.758 9.935 - 9.996: 69.9846% ( 130)
00:15:00.758 9.996 - 10.057: 71.1750% ( 108)
00:15:00.758 10.057 - 10.118: 72.2332% ( 96)
00:15:00.758 10.118 - 10.179: 73.1151% ( 80)
00:15:00.758 10.179 - 10.240: 73.8205% ( 64)
00:15:00.758 10.240 - 10.301: 74.1512% ( 30)
00:15:00.758 10.301 - 10.362: 74.5591% ( 37)
00:15:00.758 10.362 - 10.423: 74.7906% ( 21)
00:15:00.758 10.423 - 10.484: 74.9669% ( 16)
00:15:00.758 10.484 - 10.545: 75.1433% ( 16)
00:15:00.758 10.545 - 10.606: 75.1984% ( 5)
00:15:00.758 10.606 - 10.667: 75.2646% ( 6)
00:15:00.758 10.667 - 10.728: 75.3197% ( 5)
00:15:00.758 10.728 - 10.789: 75.3858% ( 6)
00:15:00.758 10.789 - 10.850: 75.4189% ( 3)
00:15:00.758 10.850 - 10.910: 75.4850% ( 6)
00:15:00.758 10.910 - 10.971: 75.5511% ( 6)
00:15:00.758 10.971 - 11.032: 75.6173% ( 6)
00:15:00.758 11.032 - 11.093: 75.6614% ( 4)
00:15:00.758 11.093 - 11.154: 75.6724% ( 1)
00:15:00.758 11.154 - 11.215: 75.7385% ( 6)
00:15:00.758 11.215 - 11.276: 75.7716% ( 3)
00:15:00.758 11.276 - 11.337: 75.7826% ( 1)
00:15:00.758 11.398 - 11.459: 75.8377% ( 5)
00:15:00.758 11.459 - 11.520: 75.8598% ( 2)
00:15:00.758 11.520 - 11.581: 75.8818% ( 2)
00:15:00.758 11.703 - 11.764: 75.8929% ( 1)
00:15:00.758 11.764 - 11.825: 75.9369% ( 4)
00:15:00.758 11.886 - 11.947: 75.9590% ( 2)
00:15:00.758 12.008 - 12.069: 75.9700% ( 1)
00:15:00.758 12.251 - 12.312: 76.0031% ( 3)
00:15:00.758 12.312 - 12.373: 76.0141% ( 1)
00:15:00.758 12.373 - 12.434: 76.0251% ( 1)
00:15:00.758 12.434 - 12.495: 76.0692% ( 4)
00:15:00.758 12.495 - 12.556: 77.0723% ( 91)
00:15:00.758 12.556 - 12.617: 80.6768% ( 327)
00:15:00.758 12.617 - 12.678: 84.9647% ( 389)
00:15:00.758 12.678 - 12.739: 87.7976% ( 257)
00:15:00.758 12.739 - 12.800: 90.0904% ( 208)
00:15:00.758 12.800 - 12.861: 91.2478% ( 105)
00:15:00.758 12.861 - 12.922: 92.0194% ( 70)
00:15:00.758 12.922 - 12.983: 92.6146% ( 54)
00:15:00.758 12.983 - 13.044: 93.1658% ( 50)
00:15:00.758 13.044 - 13.105: 93.6839% ( 47)
00:15:00.758 13.105 - 13.166: 94.0366% ( 32)
00:15:00.758 13.166 - 13.227: 94.3232% ( 26)
00:15:00.758 13.227 - 13.288: 94.4665% ( 13)
00:15:00.758 13.288 - 13.349: 94.6208% ( 14)
00:15:00.758 13.349 - 13.410: 94.7310% ( 10)
00:15:00.758 13.410 - 13.470: 94.7972% ( 6)
00:15:00.758 13.470 - 13.531: 94.8743% ( 7)
00:15:00.758 13.531 - 13.592: 94.9074% ( 3)
00:15:00.758 13.592 - 13.653: 94.9295% ( 2)
00:15:00.758 13.653 - 13.714: 94.9735% ( 4)
00:15:00.758 13.714 - 13.775: 95.0066% ( 3)
00:15:00.758 13.775 - 13.836: 95.0287% ( 2)
00:15:00.758 13.836 - 13.897: 95.0617% ( 3)
00:15:00.758 13.897 - 13.958: 95.0948% ( 3)
00:15:00.758 13.958 - 14.019: 95.1830% ( 8)
00:15:00.758 14.080 - 14.141: 95.2381% ( 5)
00:15:00.758 14.141 - 14.202: 95.2932% ( 5)
00:15:00.758 14.202 - 14.263: 95.3593% ( 6)
00:15:00.758 14.263 - 14.324: 95.4255% ( 6)
00:15:00.758 14.324 - 14.385: 95.5026% ( 7)
00:15:00.758 14.385 - 14.446: 95.5247% ( 2)
00:15:00.758 14.446 - 14.507: 95.5908% ( 6)
00:15:00.758 14.507 - 14.568: 95.6239% ( 3)
00:15:00.758 14.568 - 14.629: 95.6570% ( 3)
00:15:00.758 14.629 - 14.690: 95.6900% ( 3)
00:15:00.758 14.690 - 14.750: 95.7672% ( 7)
00:15:00.758 14.750 - 14.811: 95.7782% ( 1)
00:15:00.758 14.811 - 14.872: 95.8554% ( 7)
00:15:00.758 14.872 - 14.933: 95.8664% ( 1)
00:15:00.758 14.933 - 14.994: 95.9325% ( 6)
00:15:00.758 14.994 - 15.055: 95.9766% ( 4)
00:15:00.758 15.055 - 15.116: 96.0428% ( 6)
00:15:00.758 15.116 - 15.177: 96.0869% ( 4)
00:15:00.758 15.177 - 15.238: 96.1750% ( 8)
00:15:00.758 15.238 - 15.299: 96.1861% ( 1)
00:15:00.758 15.299 - 15.360: 96.1971% ( 1)
00:15:00.758 15.360 - 15.421: 96.2081% ( 1)
00:15:00.758 15.421 - 15.482: 96.2191% ( 1)
00:15:00.758 15.482 - 15.543: 96.2522% ( 3)
00:15:00.758 15.543 - 15.604: 96.2853% ( 3)
00:15:00.758 15.604 - 15.726: 96.3845% ( 9)
00:15:00.758 15.726 - 15.848: 96.4616% ( 7)
00:15:00.758 15.848 - 15.970: 96.5608% ( 9)
00:15:00.758 15.970 - 16.091: 96.6821% ( 11)
00:15:00.758 16.091 - 16.213: 96.7152% ( 3)
00:15:00.758 16.213 - 16.335: 96.7813% ( 6)
00:15:00.758 16.335 - 16.457: 96.8254% ( 4)
00:15:00.758 16.457 - 16.579: 96.9356% ( 10)
00:15:00.758 16.579 - 16.701: 96.9797% ( 4)
00:15:00.758 16.701 - 16.823: 97.0238% ( 4)
00:15:00.758 16.945 - 17.067: 97.1010% ( 7)
00:15:00.758 17.067 - 17.189: 97.1892% ( 8)
00:15:00.758 17.189 - 17.310: 97.2222% ( 3)
00:15:00.758 17.310 - 17.432: 97.2663% ( 4)
00:15:00.758 17.432 - 17.554: 97.2994% ( 3)
00:15:00.758 17.554 - 17.676: 97.3214% ( 2)
00:15:00.758 17.676 - 17.798: 97.3545% ( 3)
00:15:00.758 17.798 - 17.920: 97.3655% ( 1)
00:15:00.758 17.920 - 18.042: 97.3876% ( 2)
00:15:00.758 18.042 - 18.164: 97.4096% ( 2)
00:15:00.758 18.286 - 18.408: 97.4206% ( 1)
00:15:00.758 18.408 - 18.530: 97.4317% ( 1)
00:15:00.758 18.530 - 18.651: 97.4427% ( 1)
00:15:00.758 18.651 - 18.773: 97.4537% ( 1)
00:15:00.758 19.139 - 19.261: 97.4757% ( 2)
00:15:00.758 19.261 - 19.383: 97.4868% ( 1)
00:15:00.758 19.627 - 19.749: 97.4978% ( 1)
00:15:00.758 19.870 - 19.992: 97.5088% ( 1)
00:15:00.758 19.992 - 20.114: 97.5419% ( 3)
00:15:00.758 20.114 - 20.236: 97.5529% ( 1)
00:15:00.758 20.236 - 20.358: 97.6080% ( 5)
00:15:00.758 20.358 - 20.480: 97.6301% ( 2)
00:15:00.758 20.480 - 20.602: 97.7513% ( 11)
00:15:00.758 20.602 - 20.724: 97.8616% ( 10)
00:15:00.758 20.724 - 20.846: 97.9387% ( 7)
00:15:00.758 20.846 - 20.968: 98.0820% ( 13)
00:15:00.758 20.968 - 21.090: 98.2033% ( 11)
00:15:00.758 21.090 - 21.211: 98.2804% ( 7)
00:15:00.758 21.211 - 21.333: 98.3686% ( 8)
00:15:00.758 21.333 - 21.455: 98.4678% ( 9)
00:15:00.758 21.455 - 21.577: 98.5891% ( 11)
00:15:00.758 21.577 - 21.699: 98.6883% ( 9)
00:15:00.758 21.699 - 21.821: 98.7434% ( 5)
00:15:00.758 21.821 - 21.943: 98.7654% ( 2)
00:15:00.758 21.943 - 22.065: 98.8316% ( 6)
00:15:00.759 22.065 - 22.187: 98.8646% ( 3)
00:15:00.759 22.187 - 22.309: 98.9198% ( 5)
00:15:00.759 22.309 - 22.430: 98.9528% ( 3)
00:15:00.759 22.552 - 22.674: 98.9859% ( 3)
00:15:00.759 22.674 - 22.796: 98.9969% ( 1)
00:15:00.759 22.796 - 22.918: 99.0079% ( 1)
00:15:00.759 22.918 - 23.040: 99.0190% ( 1)
00:15:00.759 23.040 - 23.162: 99.0410% ( 2)
00:15:00.759 23.162 - 23.284: 99.0520% ( 1)
00:15:00.759 23.284 - 23.406: 99.0741% ( 2)
00:15:00.759 23.406 - 23.528: 99.0851% ( 1)
00:15:00.759 24.137 - 24.259: 99.0961% ( 1)
00:15:00.759 24.381 - 24.503: 99.1182% ( 2)
00:15:00.759 24.503 - 24.625: 99.1402% ( 2)
00:15:00.759 24.747 - 24.869: 99.1512% ( 1)
00:15:00.759 24.869 - 24.990: 99.1953% ( 4)
00:15:00.759 24.990 - 25.112: 99.2174% ( 2)
00:15:00.759 25.112 - 25.234: 99.2284% ( 1)
00:15:00.759 25.234 - 25.356: 99.2615% ( 3)
00:15:00.759 25.356 - 25.478: 99.2725% ( 1)
00:15:00.759 25.478 - 25.600: 99.3166% ( 4)
00:15:00.759 25.600 - 25.722: 99.3386% ( 2)
00:15:00.759 25.722 - 25.844: 99.3607% ( 2)
00:15:00.759 25.844 - 25.966: 99.4048% ( 4)
00:15:00.759 25.966 - 26.088: 99.4489% ( 4)
00:15:00.759 26.088 - 26.210: 99.5040% ( 5)
00:15:00.759 26.210 - 26.331: 99.5260% ( 2)
00:15:00.759 26.331 - 26.453: 99.5591% ( 3)
00:15:00.759 26.453 - 26.575: 99.6142% ( 5)
00:15:00.759 26.575 - 26.697: 99.6252% ( 1)
00:15:00.759 26.697 - 26.819: 99.6473% ( 2)
00:15:00.759 26.819 - 26.941: 99.6803% ( 3)
00:15:00.759 26.941 - 27.063: 99.7134% ( 3)
00:15:00.759 27.063 - 27.185: 99.7244% ( 1)
00:15:00.759 27.185 - 27.307: 99.7354% ( 1)
00:15:00.759 27.307 - 27.429: 99.7465% ( 1)
00:15:00.759 27.429 - 27.550: 99.7575% ( 1)
00:15:00.759 27.550 - 27.672: 99.7685% ( 1)
00:15:00.759 27.672 - 27.794: 99.7795% ( 1)
00:15:00.759 28.038 - 28.160: 99.8016% ( 2)
00:15:00.759 28.770 - 28.891: 99.8126% ( 1)
00:15:00.759 29.013 - 29.135: 99.8236% ( 1)
00:15:00.759 29.379 - 29.501: 99.8347% ( 1)
00:15:00.759 29.501 - 29.623: 99.8457% ( 1)
00:15:00.759 30.964 - 31.086: 99.8567% ( 1)
00:15:00.759 33.158 - 33.402: 99.8677% ( 1)
00:15:00.759 33.890 - 34.133: 99.8787% ( 1)
00:15:00.759 34.377 - 34.621: 99.8898% ( 1)
00:15:00.759 36.815 - 37.059: 99.9008% ( 1)
00:15:00.759 42.423 - 42.667: 99.9118% ( 1)
00:15:00.759 42.667 - 42.910: 99.9228% ( 1)
00:15:00.759 43.886 - 44.130: 99.9339% ( 1)
00:15:00.759 45.349 - 45.592: 99.9449% ( 1)
00:15:00.759 49.006 - 49.250: 99.9559% ( 1)
00:15:00.759 56.320 - 56.564: 99.9669% ( 1)
00:15:00.759 113.615 - 114.103: 99.9780% ( 1)
00:15:00.759 125.806 - 126.781: 99.9890% ( 1)
00:15:00.759 690.469 - 694.370: 100.0000% ( 1)
00:15:00.759
00:15:00.759 ************************************
00:15:00.759 END TEST nvme_overhead
00:15:00.759 ************************************
00:15:00.759
00:15:00.759 real 0m1.317s
00:15:00.759 user 0m1.113s
00:15:00.759 sys 0m0.149s
00:15:00.759 13:48:47 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:00.759 13:48:47 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
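[Note] The overhead tool's "submit (in ns) avg, min, max" line and the two histograms come from timestamping each IO twice: once around the submit call and once around the completion poll. The mechanism is just tick deltas converted to nanoseconds; a minimal sketch using SPDK's tick helpers, where submit_fn is a placeholder for the real spdk_nvme_ns_cmd_* call:

    #include "spdk/env.h"

    /* Convert a tick delta to nanoseconds using the measured tick rate. */
    static uint64_t ticks_to_ns(uint64_t ticks)
    {
        return ticks * 1000000000ULL / spdk_get_ticks_hz();
    }

    /* Returns how long one submission took; the caller feeds the result
     * into the running avg/min/max and the submit histogram. */
    static uint64_t timed_submit(int (*submit_fn)(void *), void *ctx)
    {
        uint64_t start = spdk_get_ticks();

        submit_fn(ctx);
        return ticks_to_ns(spdk_get_ticks() - start);
    }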
00:15:00.759 13:48:47 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:15:00.759 13:48:47 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:15:00.759 13:48:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:00.759 13:48:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:00.759 ************************************
00:15:00.759 START TEST nvme_arbitration
00:15:00.759 ************************************
00:15:00.759 13:48:47 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:15:04.083 Initializing NVMe Controllers
00:15:04.083 Attached to 0000:00:10.0
00:15:04.083 Attached to 0000:00:11.0
00:15:04.083 Attached to 0000:00:13.0
00:15:04.083 Attached to 0000:00:12.0
00:15:04.083 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:15:04.083 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:15:04.083 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:15:04.083 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:15:04.083 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:15:04.083 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:15:04.083 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:15:04.083 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:15:04.083 Initialization complete. Launching workers.
00:15:04.083 Starting thread on core 1 with urgent priority queue
00:15:04.083 Starting thread on core 2 with urgent priority queue
00:15:04.083 Starting thread on core 3 with urgent priority queue
00:15:04.083 Starting thread on core 0 with urgent priority queue
00:15:04.083 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios
00:15:04.083 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios
00:15:04.083 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios
00:15:04.083 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios
00:15:04.083 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios
00:15:04.083 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios
00:15:04.083 ========================================================
00:15:04.083
00:15:04.083 ************************************
00:15:04.083 END TEST nvme_arbitration
00:15:04.083 ************************************
00:15:04.083
00:15:04.083 real 0m3.451s
00:15:04.083 user 0m9.485s
00:15:04.083 sys 0m0.149s
00:15:04.083 13:48:50 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:04.083 13:48:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
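[Note] The arbitration example runs submitter threads on cores 0-3 with qpairs at different priorities; the per-core "IO/s" lines above show how the device's weighted round robin splits bandwidth between them. Priority is attached when the qpair is created. A sketch, under the assumption that the controller was probed with WRR arbitration enabled (arb_mechanism = SPDK_NVME_CC_AMS_WRR in the spdk_nvme_ctrlr_opts):

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        /* Matches "Starting thread on core N with urgent priority queue";
         * only meaningful when WRR arbitration is active. */
        opts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }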
00:15:04.083 13:48:50 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:15:04.083 13:48:50 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:15:04.083 13:48:50 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:04.083 13:48:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:04.083 ************************************
00:15:04.083 START TEST nvme_single_aen
00:15:04.083 ************************************
00:15:04.083 13:48:50 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:15:04.341 Asynchronous Event Request test
00:15:04.341 Attached to 0000:00:10.0
00:15:04.341 Attached to 0000:00:11.0
00:15:04.341 Attached to 0000:00:13.0
00:15:04.341 Attached to 0000:00:12.0
00:15:04.341 Reset controller to setup AER completions for this process
00:15:04.341 Registering asynchronous event callbacks...
00:15:04.341 Getting orig temperature thresholds of all controllers
00:15:04.341 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:04.341 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:04.341 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:04.341 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:04.341 Setting all controllers temperature threshold low to trigger AER
00:15:04.341 Waiting for all controllers temperature threshold to be set lower
00:15:04.341 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:04.341 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:15:04.341 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:04.341 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:15:04.341 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:04.341 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:15:04.341 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:04.341 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:15:04.341 Waiting for all controllers to trigger AER and reset threshold
00:15:04.341 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:04.341 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:04.341 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:04.341 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:15:04.341 Cleaning up...
00:15:04.341 ************************************
00:15:04.341 END TEST nvme_single_aen
00:15:04.341 ************************************
00:15:04.341
00:15:04.341 real 0m0.310s
00:15:04.341 user 0m0.120s
00:15:04.341 sys 0m0.147s
00:15:04.341 13:48:51 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:04.341 13:48:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
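[Note] The aer test registers an AER handler and then drops the temperature-threshold feature below the device's current temperature (323 K here), so every controller raises the "log page 2" (SMART / health information) event and the handler restores the original threshold. A sketch of the two calls involved, per my reading of SPDK's admin API (the 200 K value is an arbitrary illustration):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* The "aer_cb for log page 2" lines come from decoding cpl->cdw0;
         * resetting the threshold would be kicked off from here. */
        printf("aer_cb: cdw0 0x%x\n", cpl->cdw0);
    }

    static void trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr,
                                        spdk_nvme_cmd_cb done)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        /* cdw11 carries the threshold in Kelvin; 200 K is far below the
         * reported current temperature, so the AER fires. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD, 200, 0, NULL, 0,
            done, NULL);
    }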
00:15:04.599 13:48:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:15:04.599 13:48:51 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:04.599 13:48:51 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:04.599 13:48:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:04.599 ************************************
00:15:04.599 START TEST nvme_doorbell_aers
00:15:04.599 ************************************
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:15:04.599 13:48:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:15:05.187 [2024-11-04 13:48:51.779678] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:15.196 Executing: test_write_invalid_db
00:15:15.196 Waiting for AER completion...
00:15:15.196 Failure: test_write_invalid_db
00:15:15.196
00:15:15.196 Executing: test_invalid_db_write_overflow_sq
00:15:15.196 Waiting for AER completion...
00:15:15.196 Failure: test_invalid_db_write_overflow_sq
00:15:15.196
00:15:15.196 Executing: test_invalid_db_write_overflow_cq
00:15:15.196 Waiting for AER completion...
00:15:15.196 Failure: test_invalid_db_write_overflow_cq
00:15:15.196
00:15:15.196 13:49:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:15:15.196 13:49:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:15:15.455 [2024-11-04 13:49:01.778845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:25.169 Executing: test_write_invalid_db
00:15:25.169 Waiting for AER completion...
00:15:25.169 Failure: test_write_invalid_db
00:15:25.169
00:15:25.169 Executing: test_invalid_db_write_overflow_sq
00:15:25.169 Waiting for AER completion...
00:15:25.169 Failure: test_invalid_db_write_overflow_sq
00:15:25.169
00:15:25.169 Executing: test_invalid_db_write_overflow_cq
00:15:25.169 Waiting for AER completion...
00:15:25.169 Failure: test_invalid_db_write_overflow_cq
00:15:25.169
00:15:25.169 13:49:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:15:25.169 13:49:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:15:25.436 [2024-11-04 13:49:11.839662] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:35.208 Executing: test_write_invalid_db
00:15:35.208 Waiting for AER completion...
00:15:35.208 Failure: test_write_invalid_db
00:15:35.208
00:15:35.208 Executing: test_invalid_db_write_overflow_sq
00:15:35.208 Waiting for AER completion...
00:15:35.208 Failure: test_invalid_db_write_overflow_sq
00:15:35.208
00:15:35.208 Executing: test_invalid_db_write_overflow_cq
00:15:35.208 Waiting for AER completion...
00:15:35.208 Failure: test_invalid_db_write_overflow_cq
00:15:35.208
00:15:35.208 13:49:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:15:35.208 13:49:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:15:35.467 [2024-11-04 13:49:21.885349] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.180 Executing: test_write_invalid_db
00:15:45.180 Waiting for AER completion...
00:15:45.180 Failure: test_write_invalid_db
00:15:45.180
00:15:45.180 Executing: test_invalid_db_write_overflow_sq
00:15:45.180 Waiting for AER completion...
00:15:45.180 Failure: test_invalid_db_write_overflow_sq
00:15:45.180
00:15:45.180 Executing: test_invalid_db_write_overflow_cq
00:15:45.180 Waiting for AER completion...
00:15:45.181 Failure: test_invalid_db_write_overflow_cq
00:15:45.181
00:15:45.181 ************************************
00:15:45.181 END TEST nvme_doorbell_aers
00:15:45.181 ************************************
00:15:45.181
00:15:45.181 real 0m40.301s
00:15:45.181 user 0m28.345s
00:15:45.181 sys 0m11.515s
00:15:45.181 13:49:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:45.181 13:49:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:15:45.181 13:49:31 nvme -- nvme/nvme.sh@97 -- # uname
00:15:45.181 13:49:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:15:45.181 13:49:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:15:45.181 13:49:31 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:15:45.181 13:49:31 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:45.181 13:49:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:45.181 ************************************
00:15:45.181 START TEST nvme_multi_aen
00:15:45.181 ************************************
00:15:45.181 13:49:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:15:45.181 [2024-11-04 13:49:32.031944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.181 [2024-11-04 13:49:32.032306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.181 [2024-11-04 13:49:32.032451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.181 [2024-11-04 13:49:32.035069] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.181 [2024-11-04 13:49:32.035280] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.181 [2024-11-04 13:49:32.035414] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
00:15:45.181 [2024-11-04 13:49:32.037073] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request.
Dropping the request. 00:15:45.181 [2024-11-04 13:49:32.037292] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request. 00:15:45.181 [2024-11-04 13:49:32.037328] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request. 00:15:45.181 [2024-11-04 13:49:32.039292] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request. 00:15:45.181 [2024-11-04 13:49:32.039493] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request. 00:15:45.181 [2024-11-04 13:49:32.039658] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65383) is not found. Dropping the request. 00:15:45.181 Child process pid: 65902 00:15:45.746 [Child] Asynchronous Event Request test 00:15:45.746 [Child] Attached to 0000:00:10.0 00:15:45.746 [Child] Attached to 0000:00:11.0 00:15:45.746 [Child] Attached to 0000:00:13.0 00:15:45.746 [Child] Attached to 0000:00:12.0 00:15:45.746 [Child] Registering asynchronous event callbacks... 00:15:45.746 [Child] Getting orig temperature thresholds of all controllers 00:15:45.746 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 [Child] Waiting for all controllers to trigger AER and reset threshold 00:15:45.746 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 [Child] Cleaning up... 00:15:45.746 Asynchronous Event Request test 00:15:45.746 Attached to 0000:00:10.0 00:15:45.746 Attached to 0000:00:11.0 00:15:45.746 Attached to 0000:00:13.0 00:15:45.746 Attached to 0000:00:12.0 00:15:45.746 Reset controller to setup AER completions for this process 00:15:45.746 Registering asynchronous event callbacks... 
00:15:45.746 Getting orig temperature thresholds of all controllers 00:15:45.746 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:45.746 Setting all controllers temperature threshold low to trigger AER 00:15:45.746 Waiting for all controllers temperature threshold to be set lower 00:15:45.746 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:15:45.746 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:15:45.746 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:15:45.746 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:45.746 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:15:45.746 Waiting for all controllers to trigger AER and reset threshold 00:15:45.746 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:45.746 Cleaning up... 00:15:45.746 00:15:45.746 real 0m0.794s 00:15:45.746 user 0m0.278s 00:15:45.746 sys 0m0.381s 00:15:45.746 ************************************ 00:15:45.746 END TEST nvme_multi_aen 00:15:45.746 ************************************ 00:15:45.746 13:49:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:45.746 13:49:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:15:45.746 13:49:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:45.746 13:49:32 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:45.746 13:49:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:45.746 13:49:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.746 ************************************ 00:15:45.746 START TEST nvme_startup 00:15:45.746 ************************************ 00:15:45.746 13:49:32 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:46.042 Initializing NVMe Controllers 00:15:46.043 Attached to 0000:00:10.0 00:15:46.043 Attached to 0000:00:11.0 00:15:46.043 Attached to 0000:00:13.0 00:15:46.043 Attached to 0000:00:12.0 00:15:46.043 Initialization complete. 00:15:46.043 Time used:264955.500 (us). 
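The doorbell_aers pass that finished above is driven by a plain per-controller loop in nvme.sh: collect every NVMe PCI address from the generated JSON config, then run the test binary against each address under a watchdog. A minimal sketch of that loop, assuming the repo layout shown in this log (the gen_nvme.sh | jq pipeline is the same helper that get_nvme_bdfs xtraces further down):

  rootdir=/home/vagrant/spdk_repo/spdk

  # Pull the PCIe address of every NVMe controller out of the generated
  # config; this is what produced the four 0000:00:1x.0 addresses above.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

  for bdf in "${bdfs[@]}"; do
      # --preserve-status keeps the test binary's own exit code even when
      # the 10 s watchdog fires, so a hang is distinguishable from a failure.
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
          -r "trtype:PCIe traddr:$bdf"
  done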
00:15:46.043 ************************************ 00:15:46.043 END TEST nvme_startup 00:15:46.043 ************************************ 00:15:46.043 00:15:46.043 real 0m0.389s 00:15:46.043 user 0m0.127s 00:15:46.043 sys 0m0.214s 00:15:46.043 13:49:32 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:46.043 13:49:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:15:46.301 13:49:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:15:46.301 13:49:32 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:46.301 13:49:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:46.301 13:49:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.301 ************************************ 00:15:46.301 START TEST nvme_multi_secondary 00:15:46.301 ************************************ 00:15:46.301 13:49:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:15:46.301 13:49:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65958 00:15:46.301 13:49:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:15:46.301 13:49:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65959 00:15:46.301 13:49:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:15:46.301 13:49:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:49.582 Initializing NVMe Controllers 00:15:49.582 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:49.582 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:49.582 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:49.582 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:49.582 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:49.582 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:49.582 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:49.582 Initialization complete. Launching workers. 
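The nvme_multi_secondary run under way here launches three spdk_nvme_perf processes at once against the same controllers: a longer-lived process plus two shorter ones that share hugepage memory through the common instance ID (-i 0), with disjoint core masks keeping each on its own core. A sketch of the first batch using the exact flags from the log; which process ends up primary is decided by start order, so the comments are illustrative rather than definitive:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # 5 s run on core 0
  $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # 3 s run on core 1
  $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &           # 3 s run on core 2

  wait "$pid0"   # the 5 s run outlives both 3 s runs
  wait "$pid1"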
00:15:49.582 ======================================================== 00:15:49.582 Latency(us) 00:15:49.582 Device Information : IOPS MiB/s Average min max 00:15:49.582 PCIE (0000:00:10.0) NSID 1 from core 1: 5092.39 19.89 3140.08 1202.06 7534.57 00:15:49.582 PCIE (0000:00:11.0) NSID 1 from core 1: 5092.39 19.89 3141.59 1232.28 7340.04 00:15:49.582 PCIE (0000:00:13.0) NSID 1 from core 1: 5092.39 19.89 3141.73 1206.55 7484.13 00:15:49.582 PCIE (0000:00:12.0) NSID 1 from core 1: 5097.71 19.91 3138.48 1224.90 7301.33 00:15:49.582 PCIE (0000:00:12.0) NSID 2 from core 1: 5097.71 19.91 3138.54 1209.73 7432.38 00:15:49.582 PCIE (0000:00:12.0) NSID 3 from core 1: 5097.71 19.91 3138.69 1236.07 7371.90 00:15:49.582 ======================================================== 00:15:49.582 Total : 30570.31 119.42 3139.85 1202.06 7534.57 00:15:49.582 00:15:49.840 Initializing NVMe Controllers 00:15:49.840 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:49.840 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:49.840 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:49.840 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:49.840 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:49.840 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:49.840 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:49.840 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:49.840 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:49.840 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:49.840 Initialization complete. Launching workers. 00:15:49.840 ======================================================== 00:15:49.840 Latency(us) 00:15:49.840 Device Information : IOPS MiB/s Average min max 00:15:49.840 PCIE (0000:00:10.0) NSID 1 from core 2: 2141.94 8.37 7467.34 1889.92 15901.92 00:15:49.840 PCIE (0000:00:11.0) NSID 1 from core 2: 2141.94 8.37 7469.15 1796.51 16264.26 00:15:49.840 PCIE (0000:00:13.0) NSID 1 from core 2: 2141.94 8.37 7469.99 1959.81 13902.05 00:15:49.840 PCIE (0000:00:12.0) NSID 1 from core 2: 2141.94 8.37 7469.32 1900.06 17616.02 00:15:49.840 PCIE (0000:00:12.0) NSID 2 from core 2: 2141.94 8.37 7469.22 1573.98 17738.78 00:15:49.840 PCIE (0000:00:12.0) NSID 3 from core 2: 2141.94 8.37 7469.79 1748.30 14910.70 00:15:49.840 ======================================================== 00:15:49.840 Total : 12851.66 50.20 7469.14 1573.98 17738.78 00:15:49.840 00:15:50.098 13:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65958 00:15:52.056 Initializing NVMe Controllers 00:15:52.056 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:52.056 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:52.056 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:52.056 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:52.056 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:52.056 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:52.056 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:52.056 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:52.056 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:52.056 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:52.056 Initialization complete. Launching workers. 
00:15:52.056 ======================================================== 00:15:52.056 Latency(us) 00:15:52.056 Device Information : IOPS MiB/s Average min max 00:15:52.056 PCIE (0000:00:10.0) NSID 1 from core 0: 7493.02 29.27 2133.67 1011.04 12825.39 00:15:52.056 PCIE (0000:00:11.0) NSID 1 from core 0: 7493.02 29.27 2134.79 1039.71 13051.97 00:15:52.056 PCIE (0000:00:13.0) NSID 1 from core 0: 7493.02 29.27 2134.72 962.03 12466.03 00:15:52.056 PCIE (0000:00:12.0) NSID 1 from core 0: 7496.22 29.28 2133.74 908.84 12517.55 00:15:52.056 PCIE (0000:00:12.0) NSID 2 from core 0: 7493.02 29.27 2134.59 843.65 12986.62 00:15:52.056 PCIE (0000:00:12.0) NSID 3 from core 0: 7493.02 29.27 2134.52 821.06 12765.90 00:15:52.056 ======================================================== 00:15:52.056 Total : 44961.31 175.63 2134.34 821.06 13051.97 00:15:52.056 00:15:52.056 13:49:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65959 00:15:52.056 13:49:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66028 00:15:52.056 13:49:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:15:52.056 13:49:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66029 00:15:52.056 13:49:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:52.056 13:49:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:15:55.364 Initializing NVMe Controllers 00:15:55.364 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:55.364 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:55.364 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:55.364 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:55.364 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:55.364 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:55.364 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:55.364 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:55.364 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:55.364 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:55.364 Initialization complete. Launching workers. 
00:15:55.364 ======================================================== 00:15:55.364 Latency(us) 00:15:55.364 Device Information : IOPS MiB/s Average min max 00:15:55.364 PCIE (0000:00:10.0) NSID 1 from core 0: 5339.11 20.86 2994.93 946.54 16805.51 00:15:55.364 PCIE (0000:00:11.0) NSID 1 from core 0: 5339.11 20.86 2996.27 973.26 16813.48 00:15:55.364 PCIE (0000:00:13.0) NSID 1 from core 0: 5339.11 20.86 2996.23 984.04 16970.99 00:15:55.364 PCIE (0000:00:12.0) NSID 1 from core 0: 5339.11 20.86 2996.17 967.51 16653.27 00:15:55.364 PCIE (0000:00:12.0) NSID 2 from core 0: 5339.11 20.86 2996.13 953.82 15867.87 00:15:55.364 PCIE (0000:00:12.0) NSID 3 from core 0: 5339.11 20.86 2996.15 968.83 16040.67 00:15:55.364 ======================================================== 00:15:55.364 Total : 32034.69 125.14 2995.98 946.54 16970.99 00:15:55.364 00:15:55.364 Initializing NVMe Controllers 00:15:55.364 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:55.364 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:55.364 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:55.364 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:55.364 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:55.364 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:55.364 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:55.364 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:55.364 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:55.364 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:55.364 Initialization complete. Launching workers. 00:15:55.364 ======================================================== 00:15:55.364 Latency(us) 00:15:55.364 Device Information : IOPS MiB/s Average min max 00:15:55.364 PCIE (0000:00:10.0) NSID 1 from core 1: 5053.99 19.74 3163.85 1173.82 11116.57 00:15:55.364 PCIE (0000:00:11.0) NSID 1 from core 1: 5053.99 19.74 3164.98 1124.38 10578.46 00:15:55.364 PCIE (0000:00:13.0) NSID 1 from core 1: 5053.99 19.74 3164.80 1151.71 10926.78 00:15:55.364 PCIE (0000:00:12.0) NSID 1 from core 1: 5053.99 19.74 3164.64 1078.05 11456.02 00:15:55.364 PCIE (0000:00:12.0) NSID 2 from core 1: 5053.99 19.74 3164.45 1044.09 11782.74 00:15:55.364 PCIE (0000:00:12.0) NSID 3 from core 1: 5053.99 19.74 3164.27 981.51 11558.51 00:15:55.364 ======================================================== 00:15:55.364 Total : 30323.95 118.45 3164.50 981.51 11782.74 00:15:55.364 00:15:57.264 Initializing NVMe Controllers 00:15:57.264 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:57.264 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:57.264 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:57.264 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:57.264 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:57.264 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:57.264 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:57.264 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:57.264 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:57.264 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:57.264 Initialization complete. Launching workers. 
00:15:57.264 ======================================================== 00:15:57.264 Latency(us) 00:15:57.264 Device Information : IOPS MiB/s Average min max 00:15:57.264 PCIE (0000:00:10.0) NSID 1 from core 2: 3106.54 12.13 5148.75 1126.60 17809.82 00:15:57.264 PCIE (0000:00:11.0) NSID 1 from core 2: 3106.54 12.13 5149.40 1171.32 16818.03 00:15:57.264 PCIE (0000:00:13.0) NSID 1 from core 2: 3106.54 12.13 5149.33 1144.00 15871.56 00:15:57.264 PCIE (0000:00:12.0) NSID 1 from core 2: 3106.54 12.13 5149.54 1138.52 16383.84 00:15:57.264 PCIE (0000:00:12.0) NSID 2 from core 2: 3106.54 12.13 5149.81 1156.93 16511.62 00:15:57.264 PCIE (0000:00:12.0) NSID 3 from core 2: 3106.54 12.13 5149.53 1030.74 17183.84 00:15:57.264 ======================================================== 00:15:57.264 Total : 18639.23 72.81 5149.39 1030.74 17809.82 00:15:57.264 00:15:57.523 ************************************ 00:15:57.523 END TEST nvme_multi_secondary 00:15:57.523 ************************************ 00:15:57.523 13:49:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66028 00:15:57.523 13:49:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66029 00:15:57.523 00:15:57.523 real 0m11.229s 00:15:57.523 user 0m18.782s 00:15:57.523 sys 0m1.225s 00:15:57.523 13:49:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:57.523 13:49:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:15:57.523 13:49:44 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:15:57.523 13:49:44 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:15:57.523 13:49:44 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64956 ]] 00:15:57.523 13:49:44 nvme -- common/autotest_common.sh@1092 -- # kill 64956 00:15:57.523 13:49:44 nvme -- common/autotest_common.sh@1093 -- # wait 64956 00:15:57.523 [2024-11-04 13:49:44.267993] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.268130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.268228] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.268291] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.273007] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.273117] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.273173] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.273226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.277286] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 
00:15:57.523 [2024-11-04 13:49:44.277527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.277588] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.277628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.281050] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.281126] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.281162] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.523 [2024-11-04 13:49:44.281198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65901) is not found. Dropping the request. 00:15:57.782 13:49:44 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:15:57.782 13:49:44 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:15:57.782 13:49:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:57.782 13:49:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:57.782 13:49:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:57.782 13:49:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.782 ************************************ 00:15:57.782 START TEST bdev_nvme_reset_stuck_adm_cmd 00:15:57.782 ************************************ 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:57.782 * Looking for test storage... 
00:15:57.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.782 --rc genhtml_branch_coverage=1 00:15:57.782 --rc genhtml_function_coverage=1 00:15:57.782 --rc genhtml_legend=1 00:15:57.782 --rc geninfo_all_blocks=1 00:15:57.782 --rc geninfo_unexecuted_blocks=1 00:15:57.782 00:15:57.782 ' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.782 --rc genhtml_branch_coverage=1 00:15:57.782 --rc genhtml_function_coverage=1 00:15:57.782 --rc genhtml_legend=1 00:15:57.782 --rc geninfo_all_blocks=1 00:15:57.782 --rc geninfo_unexecuted_blocks=1 00:15:57.782 00:15:57.782 ' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.782 --rc genhtml_branch_coverage=1 00:15:57.782 --rc genhtml_function_coverage=1 00:15:57.782 --rc genhtml_legend=1 00:15:57.782 --rc geninfo_all_blocks=1 00:15:57.782 --rc geninfo_unexecuted_blocks=1 00:15:57.782 00:15:57.782 ' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.782 --rc genhtml_branch_coverage=1 00:15:57.782 --rc genhtml_function_coverage=1 00:15:57.782 --rc genhtml_legend=1 00:15:57.782 --rc geninfo_all_blocks=1 00:15:57.782 --rc geninfo_unexecuted_blocks=1 00:15:57.782 00:15:57.782 ' 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:15:57.782 
13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:15:57.782 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:15:58.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66198 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66198 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 66198 ']' 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:58.040 13:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:58.040 [2024-11-04 13:49:44.909788] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:15:58.040 [2024-11-04 13:49:44.910016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66198 ] 00:15:58.299 [2024-11-04 13:49:45.126563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.557 [2024-11-04 13:49:45.276696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.557 [2024-11-04 13:49:45.276807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.557 [2024-11-04 13:49:45.276899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.557 [2024-11-04 13:49:45.276907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.493 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:59.493 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:15:59.493 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:15:59.493 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.493 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:59.751 nvme0n1 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_bO0tH.txt 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:59.751 true 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730728186 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66227 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:15:59.751 13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:59.751 
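At this point the trap is armed: a one-shot error injection is registered for the next admin Get Features command, and that command has just been fired in the background, where it will sit unsubmitted. The xtrace that follows sleeps briefly and then resets the controller, which must complete the stuck command manually and quickly. Condensed to just the RPCs visible in this log (the temp file name stands in for the mktemp'd /tmp/err_inj_XXXXX.txt, and the base64 blob is copied verbatim from the send_cmd line above: a 64-byte admin command with opcode 0x0a and cdw10=7, Get Features / Number of Queues):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==

  # Hold the next admin opcode 0x0a for up to 15 s without submitting it,
  # then complete it with sct=0 sc=1 (the INVALID OPCODE 00/01 seen below).
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

  # Fire the doomed command in the background; it cannot complete on its own.
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" > /tmp/err_inj.txt &
  get_feat_pid=$!

  sleep 2
  start=$(date +%s)
  $rpc bdev_nvme_reset_controller nvme0   # the reset flushes the pending command
  wait "$get_feat_pid"                    # returns once it is completed manually
  (( $(date +%s) - start <= 5 ))          # well inside the 15 s injection timeout

The jq/base64/hexdump xtrace further down then decodes the saved completion and checks that its status fields match the injected pair (sc=0x1, sct=0x0).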
13:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:01.654 [2024-11-04 13:49:48.475157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:16:01.654 [2024-11-04 13:49:48.475649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:01.654 [2024-11-04 13:49:48.475704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.654 [2024-11-04 13:49:48.475733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.654 [2024-11-04 13:49:48.477876] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:16:01.654 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66227 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66227 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66227 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_bO0tH.txt 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_bO0tH.txt 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66198 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 66198 ']' 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 66198 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:01.654 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66198 00:16:01.913 killing process with pid 66198 00:16:01.913 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:01.913 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:01.913 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66198' 00:16:01.913 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 66198 00:16:01.913 13:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 66198 00:16:05.246 13:49:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:16:05.246 13:49:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:16:05.246 ************************************ 00:16:05.246 END TEST bdev_nvme_reset_stuck_adm_cmd 00:16:05.246 ************************************ 00:16:05.246 00:16:05.246 real 0m7.028s 
00:16:05.246 user 0m24.716s 00:16:05.246 sys 0m0.831s 00:16:05.246 13:49:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:05.246 13:49:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:05.246 13:49:51 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:16:05.246 13:49:51 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:16:05.246 13:49:51 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:05.246 13:49:51 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:05.246 13:49:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.246 ************************************ 00:16:05.246 START TEST nvme_fio 00:16:05.246 ************************************ 00:16:05.246 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:16:05.246 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:05.246 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:16:05.246 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:16:05.246 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:05.246 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:16:05.247 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:05.247 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:05.247 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:05.247 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:16:05.247 13:49:51 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:05.247 13:49:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:05.504 13:49:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:05.504 13:49:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:05.504 13:49:52 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:05.504 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:16:05.505 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:05.505 13:49:52 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:05.762 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:05.762 fio-3.35 00:16:05.762 Starting 1 thread 00:16:09.054 00:16:09.054 test: (groupid=0, jobs=1): err= 0: pid=66384: Mon Nov 4 13:49:55 2024 00:16:09.054 read: IOPS=16.6k, BW=65.0MiB/s (68.2MB/s)(130MiB/2001msec) 00:16:09.054 slat (nsec): min=4494, max=98088, avg=6184.19, stdev=2374.70 00:16:09.054 clat (usec): min=250, max=10658, avg=3824.37, stdev=851.39 00:16:09.054 lat (usec): min=255, max=10715, avg=3830.55, stdev=852.67 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[ 2474], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3294], 00:16:09.054 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3687], 00:16:09.054 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 5080], 95.00th=[ 5669], 00:16:09.054 | 99.00th=[ 6521], 99.50th=[ 7308], 99.90th=[ 9241], 99.95th=[ 9896], 00:16:09.054 | 99.99th=[10421] 00:16:09.054 bw ( KiB/s): min=58160, max=70136, per=95.28%, avg=63429.33, stdev=6116.01, samples=3 00:16:09.054 iops : min=14540, max=17534, avg=15857.33, stdev=1529.00, samples=3 00:16:09.054 write: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec); 0 zone resets 00:16:09.054 slat (nsec): min=4679, max=84086, avg=6365.32, stdev=2325.18 00:16:09.054 clat (usec): min=230, max=10452, avg=3831.64, stdev=857.09 00:16:09.054 lat (usec): min=237, max=10463, avg=3838.00, stdev=858.37 00:16:09.054 clat percentiles (usec): 00:16:09.054 | 1.00th=[ 2442], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3294], 00:16:09.054 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3687], 00:16:09.054 | 70.00th=[ 3949], 80.00th=[ 4293], 90.00th=[ 5145], 95.00th=[ 5669], 00:16:09.054 | 99.00th=[ 6521], 99.50th=[ 7308], 99.90th=[ 9372], 99.95th=[ 9765], 00:16:09.054 | 99.99th=[10290] 00:16:09.054 bw ( KiB/s): min=58440, max=69448, per=94.65%, avg=63141.33, stdev=5676.87, samples=3 00:16:09.054 iops : min=14610, max=17362, avg=15785.33, stdev=1419.22, samples=3 00:16:09.054 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:16:09.054 lat (msec) : 2=0.26%, 4=71.82%, 10=27.83%, 20=0.03% 00:16:09.054 cpu : usr=99.10%, sys=0.10%, 
ctx=3, majf=0, minf=607 00:16:09.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:09.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.054 issued rwts: total=33302,33371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.054 00:16:09.054 Run status group 0 (all jobs): 00:16:09.054 READ: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=130MiB (136MB), run=2001-2001msec 00:16:09.054 WRITE: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:16:09.313 ----------------------------------------------------- 00:16:09.313 Suppressions used: 00:16:09.313 count bytes template 00:16:09.313 1 32 /usr/src/fio/parse.c 00:16:09.313 1 8 libtcmalloc_minimal.so 00:16:09.313 ----------------------------------------------------- 00:16:09.313 00:16:09.313 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:09.313 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:09.313 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:09.313 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:09.572 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:09.572 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:09.831 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:09.831 13:49:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:09.831 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:10.089 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:10.089 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:10.089 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:16:10.089 13:49:56 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:10.089 13:49:56 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:16:10.089 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:10.089 fio-3.35 00:16:10.089 Starting 1 thread 00:16:13.379 00:16:13.379 test: (groupid=0, jobs=1): err= 0: pid=66450: Mon Nov 4 13:50:00 2024 00:16:13.379 read: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(126MiB/2001msec) 00:16:13.379 slat (nsec): min=4636, max=54731, avg=6359.18, stdev=1699.88 00:16:13.379 clat (usec): min=241, max=10063, avg=3932.60, stdev=544.75 00:16:13.379 lat (usec): min=246, max=10074, avg=3938.95, stdev=545.44 00:16:13.379 clat percentiles (usec): 00:16:13.379 | 1.00th=[ 2933], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:16:13.379 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884], 00:16:13.379 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4948], 00:16:13.379 | 99.00th=[ 5604], 99.50th=[ 6390], 99.90th=[ 9110], 99.95th=[ 9634], 00:16:13.379 | 99.99th=[ 9896] 00:16:13.379 bw ( KiB/s): min=59096, max=69040, per=97.82%, avg=63320.00, stdev=5138.02, samples=3 00:16:13.379 iops : min=14774, max=17260, avg=15830.00, stdev=1284.51, samples=3 00:16:13.379 write: IOPS=16.2k, BW=63.3MiB/s (66.4MB/s)(127MiB/2001msec); 0 zone resets 00:16:13.379 slat (nsec): min=4828, max=59985, avg=6502.45, stdev=1651.95 00:16:13.379 clat (usec): min=282, max=10164, avg=3935.48, stdev=542.26 00:16:13.379 lat (usec): min=290, max=10171, avg=3941.98, stdev=542.91 00:16:13.379 clat percentiles (usec): 00:16:13.379 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3621], 00:16:13.379 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884], 00:16:13.379 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4948], 00:16:13.379 | 99.00th=[ 5604], 99.50th=[ 6456], 99.90th=[ 8979], 99.95th=[ 9765], 00:16:13.379 | 99.99th=[ 9896] 00:16:13.379 bw ( KiB/s): min=59488, max=68416, per=97.24%, avg=63029.33, stdev=4741.44, samples=3 00:16:13.379 iops : min=14872, max=17104, avg=15757.33, stdev=1185.36, samples=3 00:16:13.379 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:16:13.379 lat (msec) : 2=0.09%, 4=67.61%, 10=32.26%, 20=0.01% 00:16:13.379 cpu : usr=99.10%, sys=0.15%, ctx=2, majf=0, minf=607 00:16:13.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:13.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.379 issued rwts: total=32381,32425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.379 00:16:13.379 Run status group 0 (all jobs): 00:16:13.379 READ: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=126MiB (133MB), run=2001-2001msec 00:16:13.379 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=127MiB (133MB), run=2001-2001msec 00:16:13.637 ----------------------------------------------------- 00:16:13.637 Suppressions used: 00:16:13.637 count bytes template 00:16:13.637 1 32 /usr/src/fio/parse.c 00:16:13.637 1 8 libtcmalloc_minimal.so 00:16:13.637 ----------------------------------------------------- 00:16:13.637 
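Each of these fio passes has the same shape; only the target address changes. The wrapper preloads the SPDK external engine (after libasan, since this build runs under ASAN) and spells the PCI address with dots, because fio would otherwise split the filename on ':'. A reduced sketch of one pass, with paths exactly as they appear in this log; bs=4096 reflects the earlier identify check that found no Extended Data LBA formats:

  spdk=/home/vagrant/spdk_repo/spdk
  bdf=0000:00:12.0   # the next target in the loop above

  # Preload ASAN first, then the SPDK fio engine, and hand fio a filename in
  # SPDK's trtype/traddr form with ':' rewritten to '.' for fio's parser.
  LD_PRELOAD="/usr/lib64/libasan.so.8 $spdk/build/fio/spdk_nvme" \
      /usr/src/fio/fio "$spdk/app/fio/nvme/example_config.fio" \
      "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096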
00:16:13.637 13:50:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:13.637 13:50:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:13.637 13:50:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:13.637 13:50:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:13.895 13:50:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:13.895 13:50:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:14.460 13:50:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:14.460 13:50:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:14.460 13:50:01 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:16:14.460 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:14.460 fio-3.35 00:16:14.460 Starting 1 thread 00:16:17.738 00:16:17.738 test: (groupid=0, jobs=1): err= 0: pid=66516: Mon Nov 4 13:50:04 2024 00:16:17.738 read: IOPS=15.0k, BW=58.4MiB/s (61.3MB/s)(117MiB/2001msec) 00:16:17.738 slat (usec): min=4, max=573, avg= 7.71, stdev= 5.54 00:16:17.738 clat (usec): min=238, max=10069, avg=4257.56, stdev=1287.52 00:16:17.738 lat (usec): min=243, max=10120, avg=4265.27, stdev=1291.02 00:16:17.738 clat percentiles (usec): 00:16:17.738 | 1.00th=[ 2540], 5.00th=[ 3032], 10.00th=[ 3163], 20.00th=[ 3294], 00:16:17.738 | 30.00th=[ 
3392], 40.00th=[ 3490], 50.00th=[ 3621], 60.00th=[ 3916], 00:16:17.738 | 70.00th=[ 4424], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6456], 00:16:17.738 | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 8848], 99.95th=[ 8979], 00:16:17.738 | 99.99th=[ 9896] 00:16:17.738 bw ( KiB/s): min=45357, max=67688, per=95.31%, avg=57012.33, stdev=11197.69, samples=3 00:16:17.738 iops : min=11339, max=16922, avg=14253.00, stdev=2799.55, samples=3 00:16:17.738 write: IOPS=15.0k, BW=58.4MiB/s (61.3MB/s)(117MiB/2001msec); 0 zone resets 00:16:17.738 slat (nsec): min=4705, max=50300, avg=8013.53, stdev=4395.95 00:16:17.738 clat (usec): min=260, max=9876, avg=4268.74, stdev=1289.75 00:16:17.738 lat (usec): min=266, max=9889, avg=4276.75, stdev=1293.14 00:16:17.738 clat percentiles (usec): 00:16:17.738 | 1.00th=[ 2507], 5.00th=[ 3032], 10.00th=[ 3163], 20.00th=[ 3294], 00:16:17.738 | 30.00th=[ 3425], 40.00th=[ 3523], 50.00th=[ 3654], 60.00th=[ 3916], 00:16:17.738 | 70.00th=[ 4424], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6456], 00:16:17.738 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 8848], 99.95th=[ 9110], 00:16:17.738 | 99.99th=[ 9634] 00:16:17.738 bw ( KiB/s): min=45588, max=67368, per=95.16%, avg=56934.67, stdev=10918.69, samples=3 00:16:17.738 iops : min=11397, max=16842, avg=14233.67, stdev=2729.67, samples=3 00:16:17.738 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.03% 00:16:17.738 lat (msec) : 2=0.42%, 4=61.96%, 10=37.55%, 20=0.01% 00:16:17.738 cpu : usr=98.95%, sys=0.15%, ctx=2, majf=0, minf=608 00:16:17.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:17.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.738 issued rwts: total=29925,29929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.738 00:16:17.738 Run status group 0 (all jobs): 00:16:17.738 READ: bw=58.4MiB/s (61.3MB/s), 58.4MiB/s-58.4MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2001-2001msec 00:16:17.738 WRITE: bw=58.4MiB/s (61.3MB/s), 58.4MiB/s-58.4MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2001-2001msec 00:16:17.996 ----------------------------------------------------- 00:16:17.996 Suppressions used: 00:16:17.996 count bytes template 00:16:17.996 1 32 /usr/src/fio/parse.c 00:16:17.996 1 8 libtcmalloc_minimal.so 00:16:17.996 ----------------------------------------------------- 00:16:17.996 00:16:17.996 13:50:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:17.996 13:50:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:17.996 13:50:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:17.996 13:50:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:18.254 13:50:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:18.254 13:50:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:18.824 13:50:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:18.824 13:50:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:18.824 13:50:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:16:18.824 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:18.824 fio-3.35 00:16:18.824 Starting 1 thread 00:16:23.009 00:16:23.009 test: (groupid=0, jobs=1): err= 0: pid=66582: Mon Nov 4 13:50:09 2024 00:16:23.009 read: IOPS=12.5k, BW=48.9MiB/s (51.3MB/s)(97.8MiB/2001msec) 00:16:23.009 slat (nsec): min=4569, max=65372, avg=8868.96, stdev=4665.07 00:16:23.009 clat (usec): min=230, max=11212, avg=5091.53, stdev=1352.78 00:16:23.009 lat (usec): min=235, max=11252, avg=5100.40, stdev=1356.05 00:16:23.009 clat percentiles (usec): 00:16:23.009 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3752], 00:16:23.009 | 30.00th=[ 3949], 40.00th=[ 4178], 50.00th=[ 5407], 60.00th=[ 5735], 00:16:23.009 | 70.00th=[ 5997], 80.00th=[ 6456], 90.00th=[ 6915], 95.00th=[ 7046], 00:16:23.009 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 9110], 99.95th=[ 9896], 00:16:23.009 | 99.99th=[11076] 00:16:23.009 bw ( KiB/s): min=38288, max=62568, per=97.18%, avg=48650.67, stdev=12524.23, samples=3 00:16:23.009 iops : min= 9572, max=15642, avg=12162.67, stdev=3131.06, samples=3 00:16:23.009 write: IOPS=12.5k, BW=48.9MiB/s (51.2MB/s)(97.8MiB/2001msec); 0 zone resets 00:16:23.009 slat (nsec): min=4693, max=88585, avg=9105.84, stdev=4712.69 00:16:23.009 clat (usec): min=258, max=10978, avg=5098.92, stdev=1358.06 00:16:23.009 lat (usec): min=264, max=10998, avg=5108.03, stdev=1361.36 00:16:23.009 clat percentiles (usec): 00:16:23.010 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3752], 00:16:23.010 | 30.00th=[ 3949], 40.00th=[ 4178], 50.00th=[ 5407], 60.00th=[ 5800], 00:16:23.010 | 70.00th=[ 5997], 
80.00th=[ 6521], 90.00th=[ 6915], 95.00th=[ 7046], 00:16:23.010 | 99.00th=[ 7308], 99.50th=[ 7570], 99.90th=[ 9372], 99.95th=[ 9765], 00:16:23.010 | 99.99th=[10683] 00:16:23.010 bw ( KiB/s): min=38720, max=61928, per=97.42%, avg=48738.67, stdev=11924.46, samples=3 00:16:23.010 iops : min= 9680, max=15482, avg=12184.67, stdev=2981.11, samples=3 00:16:23.010 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:16:23.010 lat (msec) : 2=0.12%, 4=31.83%, 10=67.95%, 20=0.04% 00:16:23.010 cpu : usr=98.90%, sys=0.15%, ctx=2, majf=0, minf=605 00:16:23.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:23.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:23.010 issued rwts: total=25044,25026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:23.010 00:16:23.010 Run status group 0 (all jobs): 00:16:23.010 READ: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=97.8MiB (103MB), run=2001-2001msec 00:16:23.010 WRITE: bw=48.9MiB/s (51.2MB/s), 48.9MiB/s-48.9MiB/s (51.2MB/s-51.2MB/s), io=97.8MiB (103MB), run=2001-2001msec 00:16:23.010 ----------------------------------------------------- 00:16:23.010 Suppressions used: 00:16:23.010 count bytes template 00:16:23.010 1 32 /usr/src/fio/parse.c 00:16:23.010 1 8 libtcmalloc_minimal.so 00:16:23.010 ----------------------------------------------------- 00:16:23.010 00:16:23.010 13:50:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:23.010 13:50:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:16:23.010 00:16:23.010 real 0m18.040s 00:16:23.010 user 0m14.359s 00:16:23.010 sys 0m2.054s 00:16:23.010 13:50:09 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.010 13:50:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:16:23.010 ************************************ 00:16:23.010 END TEST nvme_fio 00:16:23.010 ************************************ 00:16:23.010 00:16:23.010 real 1m34.869s 00:16:23.010 user 3m47.947s 00:16:23.010 sys 0m21.642s 00:16:23.010 13:50:09 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.010 13:50:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.010 ************************************ 00:16:23.010 END TEST nvme 00:16:23.010 ************************************ 00:16:23.010 13:50:09 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:16:23.010 13:50:09 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:23.010 13:50:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:23.010 13:50:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:23.010 13:50:09 -- common/autotest_common.sh@10 -- # set +x 00:16:23.010 ************************************ 00:16:23.010 START TEST nvme_scc 00:16:23.010 ************************************ 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:23.010 * Looking for test storage... 
00:16:23.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@345 -- # : 1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@368 -- # return 0 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:23.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.010 --rc genhtml_branch_coverage=1 00:16:23.010 --rc genhtml_function_coverage=1 00:16:23.010 --rc genhtml_legend=1 00:16:23.010 --rc geninfo_all_blocks=1 00:16:23.010 --rc geninfo_unexecuted_blocks=1 00:16:23.010 00:16:23.010 ' 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:23.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.010 --rc genhtml_branch_coverage=1 00:16:23.010 --rc genhtml_function_coverage=1 00:16:23.010 --rc genhtml_legend=1 00:16:23.010 --rc geninfo_all_blocks=1 00:16:23.010 --rc geninfo_unexecuted_blocks=1 00:16:23.010 00:16:23.010 ' 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:16:23.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.010 --rc genhtml_branch_coverage=1 00:16:23.010 --rc genhtml_function_coverage=1 00:16:23.010 --rc genhtml_legend=1 00:16:23.010 --rc geninfo_all_blocks=1 00:16:23.010 --rc geninfo_unexecuted_blocks=1 00:16:23.010 00:16:23.010 ' 00:16:23.010 13:50:09 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:23.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.010 --rc genhtml_branch_coverage=1 00:16:23.010 --rc genhtml_function_coverage=1 00:16:23.010 --rc genhtml_legend=1 00:16:23.010 --rc geninfo_all_blocks=1 00:16:23.010 --rc geninfo_unexecuted_blocks=1 00:16:23.010 00:16:23.010 ' 00:16:23.010 13:50:09 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:23.010 13:50:09 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:23.010 13:50:09 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:23.010 13:50:09 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:23.010 13:50:09 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.010 13:50:09 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.270 13:50:09 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.270 13:50:09 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.270 13:50:09 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.270 13:50:09 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.270 13:50:09 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.270 13:50:09 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.270 13:50:09 nvme_scc -- paths/export.sh@5 -- # export PATH 00:16:23.270 13:50:09 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
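What follows is nvme_scc's controller scan: functions.sh walks /sys/class/nvme/nvme*, and for each controller nvme_get runs nvme-cli's id-ctrl (then id-ns per namespace) and folds every "register : value" line into a global associative array, which is why the trace repeats the IFS=: / read / eval triple once per field. A minimal approximation of that parser, assuming whitespace handling close to (but not exactly) the traced functions.sh:

nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"               # e.g. declares the global array nvme0=()
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # "vid   " -> "vid"
    [[ -n $reg && -n $val ]] || continue
    eval "${ref}[\$reg]=\${val# }"  # e.g. nvme0[vid]=0x1b36
  done < <("$@")                    # the identify command producing the lines
}

# As traced: nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0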
00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:23.270 13:50:09 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:16:23.270 13:50:09 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.270 13:50:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:16:23.270 13:50:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:16:23.270 13:50:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:16:23.270 13:50:09 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:23.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.788 Waiting for block devices as requested 00:16:23.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:23.788 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.045 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:24.045 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:29.329 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:29.329 13:50:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:16:29.329 13:50:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:29.329 13:50:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:29.329 13:50:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:29.329 13:50:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:29.329 13:50:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:29.329 13:50:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:29.329 13:50:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:29.330 13:50:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:29.330 13:50:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:29.330 13:50:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:29.330 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:29.331 13:50:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.331 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:29.332 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:29.333 13:50:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.333 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
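The block of trace above is the nvme_get helper filling the nvme0n1 associative array: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1, and each @21..@23 group is one iteration over an output row, splitting on ':' and eval-ing the pair into the array declared at @20. A minimal sketch of that pattern, reconstructed from the trace rather than quoted from functions.sh (the trimming and quoting details are assumptions):

    nvme_get() {
        # e.g. nvme_get nvme0n1 id-ns /dev/nvme0n1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global assoc array, as at @20
        while IFS=: read -r reg val; do      # split "reg : val" rows, as at @21
            [[ -n $val ]] || continue        # the @22 guard on empty values
            reg=${reg//[[:space:]]/}         # assumed trim; trace keys carry no spaces
            val=${val# }
            eval "$ref[\$reg]=\$val"         # the @23 eval into e.g. nvme0n1[nsze]
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After it returns, fields read as plain lookups, e.g. "${nvme0n1[nsze]}" yields the 0x140000 seen above.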
00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:29.334 13:50:16 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:29.334 13:50:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:29.335 13:50:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:29.335 13:50:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:29.335 13:50:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:29.335 13:50:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:29.335 13:50:16 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:29.335 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:29.336 
13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
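Between the two register dumps, the @47..@52 records show the outer discovery pass: functions.sh walks /sys/class/nvme/nvme*, resolves each controller's PCI address, asks pci_can_use whether that BDF is permitted, and only then calls nvme_get for id-ctrl. A rough sketch of that loop; how pci is derived is an assumption, since the trace only shows the resulting value (e.g. 0000:00:10.0):

    for ctrl in /sys/class/nvme/nvme*; do            # @47
        [[ -e $ctrl ]] || continue                   # @48
        # @49: assumed derivation; the trace only shows pci=0000:00:10.0
        pci=$(basename "$(readlink -f "$ctrl/device")")
        pci_can_use "$pci" || continue               # @50, scripts/common.sh allow/block check
        ctrl_dev=${ctrl##*/}                         # @51, e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # @52, fills the nvme1 array above
    done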
00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:29.336 13:50:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.336 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:29.337 13:50:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.337 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:29.338 13:50:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
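Once a controller's id-ctrl dump completes, the @53..@63 records (visible just below, and at the end of the nvme0 pass earlier) enumerate its namespaces and fill the bookkeeping maps. A condensed sketch; in functions.sh this runs inside the scan loop sketched above (hence local -n), and the declare lines for the global maps are assumptions:

    declare -gA ctrls nvmes bdfs                 # assumed declarations
    declare -ga ordered_ctrls
    declare -gA "${ctrl_dev}_ns"
    local -n _ctrl_ns=${ctrl_dev}_ns             # @53, nameref to e.g. nvme1_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do          # @54, /sys/class/nvme/nvme1/nvme1n*
        [[ -e $ns ]] || continue                 # @55
        ns_dev=${ns##*/}                         # @56, e.g. nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57
        _ctrl_ns[${ns##*n}]=$ns_dev              # @58, keyed by namespace number
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                 # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # @61, name of that controller's ns map
    bdfs["$ctrl_dev"]=$pci                       # @62, e.g. 0000:00:10.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63, index-ordered controller list

The nvme0 pass above ended the same way, recording bdfs[nvme0]=0000:00:11.0 before moving on to nvme1.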
00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.338 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:29.339 
13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:29.339 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:29.340 13:50:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:29.340 13:50:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:29.340 13:50:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:29.340 13:50:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:29.340 13:50:16 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:29.340 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:29.603 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:29.604 13:50:16 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:29.604 13:50:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:29.604 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
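Every block of entries above follows the same shape: functions.sh runs nvme id-ctrl against the device, reads the output line by line with IFS set to ':', and evals each non-empty value into a global associative array named after the device node. A minimal, runnable sketch of that pattern, assuming nvme-cli is installed; the function and variable names here (parse_id, ref, dev) are illustrative, not the script's own:

  #!/usr/bin/env bash
  # Read "field : value" lines from nvme-cli into a global associative array,
  # mirroring the nvme_get trace above (IFS=: / read -r reg val / eval).
  parse_id() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                      # e.g. creates the global array nvme2
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}               # 'lbaf  4 ' -> 'lbaf4'
      val=${val# }                           # drop the single space after ':'
      [[ -n $val ]] && eval "${ref}[\$reg]=\"\$val\""
    done < <(nvme id-ctrl "$dev")
  }
  parse_id nvme2 /dev/nvme2 && echo "model: ${nvme2[mn]}"

Note that read -r reg val splits only at the first ':' and leaves the rest of the line in val, which is why composite values like 'ms:0 lbads:9 rp:0' survive intact in the entries above.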
00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:29.605 13:50:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
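Two of the values captured a few entries back are easy to misread: per the NVMe spec, wctemp and cctemp are reported in kelvin, so nvme2's thresholds of 343 and 373 are 70 C and 100 C. A quick check reusing the traced values (the array literal is a stand-in for the one the loop builds; the 0.15 of the kelvin offset is dropped for integer arithmetic):

  declare -A nvme2=( [wctemp]=343 [cctemp]=373 )            # values from the id-ctrl trace
  echo "warning  threshold: $(( ${nvme2[wctemp]} - 273 )) C"   # -> 70
  echo "critical threshold: $(( ${nvme2[cctemp]} - 273 )) C"   # -> 100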
00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.605 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:29.606 
13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:29.606 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
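For nvme2n1 the interesting pair is the flbas=0x4 captured above together with the lbaf rows that appear further down: the low nibble of flbas indexes the LBA-format table, and the lbads value there is a log2 block size. Once the loop has filled in those entries, the active format can be pulled out as sketched here (the array literal stands in for the real one the trace is building):

  declare -A nvme2n1=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )  # from the trace
  fmt=$(( ${nvme2n1[flbas]} & 0xf ))          # -> 4
  lbaf=${nvme2n1[lbaf$fmt]}
  lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # -> 12
  ms=${lbaf#ms:};         ms=${ms%% *}        # -> 0
  echo "in-use format $fmt: $(( 2 ** lbads ))-byte blocks, $ms bytes of metadata"

i.e. 4096-byte blocks with no separate metadata, matching the '(in use)' marker on the lbaf4 entry below.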
00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.607 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:29.608 13:50:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:29.608 13:50:16 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.608 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:29.609 13:50:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.609 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
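[context: the trace above and below is bash xtrace output from the nvme_get() helper in nvme/functions.sh, which pipes nvme-cli output line by line through a read loop and stores every "field : value" pair in a global associative array. A rough reconstruction from the @16-@23 source markers visible in this log; quoting and whitespace-trimming details in the real script may differ:

    nvme_get() {                        # e.g. nvme_get nvme2n2 id-ns /dev/nvme2n2
        local ref=$1 reg val            # functions.sh@17
        shift                           # functions.sh@18
        local -gA "$ref=()"             # functions.sh@20: global assoc array, e.g. nvme2n2=()
        while IFS=: read -r reg val; do               # functions.sh@21: split on the first ':'
            [[ -n $val ]] || continue                 # functions.sh@22: skip lines with no value
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""   # functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

Each eval 'nvme2n2[nsze]="0x100000"' / nvme2n2[nsze]=0x100000 pair in the log is one iteration of that loop.]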
00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:29.610 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
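[context: the @47-@63 markers running through this section are the controller/namespace enumeration that drives all of these nvme_get calls: every /sys/class/nvme/nvme* controller that passes pci_can_use() is identified with id-ctrl, each of its namespaces with id-ns, and the results are indexed into the ctrls/nvmes/bdfs/ordered_ctrls maps. A paraphrase of that loop as it unfolds in this trace; the pci= derivation is an assumption, since the log only shows the resulting address:

    for ctrl in /sys/class/nvme/nvme*; do               # functions.sh@47
        [[ -e $ctrl ]] || continue                      # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")") # functions.sh@49, e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                  # functions.sh@50: filter out reserved devices
        ctrl_dev=${ctrl##*/}                            # functions.sh@51, e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # functions.sh@52
        for ns in "$ctrl/${ctrl##*/}n"*; do             # functions.sh@54: nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue                    # functions.sh@55
            ns_dev=${ns##*/}                            # functions.sh@56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev                 # functions.sh@58: keyed by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                    # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # functions.sh@61
        bdfs["$ctrl_dev"]=$pci                          # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # functions.sh@63
    done]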
[trace condensed: the lbaf0-lbaf7 entries for nvme2n2 and the entire id-ns parse of nvme2n3 (discovered via /sys/class/nvme/nvme2/nvme2n3 and read with "nvme_get nvme2n3 id-ns /dev/nvme2n3") are omitted here; both namespaces report exactly the same values as nvme2n1 above (nsze=0x100000, ncap=0x100000, nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, nguid/eui64 all zero, lbaf0-lbaf7 identical, with lbaf4 "ms:0 lbads:12 rp:0" in use) and are registered as _ctrl_ns[2]=nvme2n2 and _ctrl_ns[3]=nvme2n3. In the restored excerpt below, the per-field IFS=: / read -r reg val bookkeeping entries are likewise elided.]
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:16:29.612 13:50:16 nvme_scc -- scripts/common.sh@18 -- # local i
00:16:29.612 13:50:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:16:29.612 13:50:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:16:29.612 13:50:16 nvme_scc -- scripts/common.sh@27 -- # return 0
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]]
00:16:29.612 13:50:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
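[context: with nvme2 fully registered just above (ctrls/nvmes/bdfs/ordered_ctrls) and nvme3's identify fields streaming in, later test code can query any controller property straight from these arrays instead of re-running nvme-cli. An illustrative lookup, not taken from this log; the loop and output format are hypothetical:

    # hypothetical: print model, serial and PCI address for every registered controller
    for idx in "${!ordered_ctrls[@]}"; do
        ctrl=${ordered_ctrls[idx]}          # e.g. nvme3
        unset -n id 2> /dev/null            # drop any previous nameref binding
        declare -n id=$ctrl                 # bash nameref into the assoc array filled by nvme_get
        printf '%s @ %s: mn=%s sn=%s mdts=%s\n' \
            "$ctrl" "${bdfs[$ctrl]}" "${id[mn]}" "${id[sn]}" "${id[mdts]}"
    done

For the controller being parsed here this would print "nvme3 @ 0000:00:13.0: mn=QEMU NVMe Ctrl sn=12343 mdts=7" once all of its fields have been read.]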
00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:29.612 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 
13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.613 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
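[context: the capability words logged just below (oncs=0x15d, sqes=0x66, cqes=0x44, nn=256, vwc=0x7) are bit-encoded per the NVMe spec, so tests probe them with shell arithmetic rather than string comparison. Worked out for the values recorded in this trace:

    oncs=0x15d                                       # 0b1_0101_1101
    (( oncs & 1 << 8 )) && echo "Copy command supported"   # ONCS bit 8: Simple Copy
    sqes=0x66 cqes=0x44
    echo "SQE size: $((1 << (sqes & 0xf))) bytes"    # low nibble 6 -> 2^6 = 64-byte submission entries
    echo "CQE size: $((1 << (cqes & 0xf))) bytes"    # low nibble 4 -> 2^4 = 16-byte completion entries

In 0x15d, bit 8 (0x100) is set, so this QEMU controller advertises the Simple Copy command that the nvme_scc suite exercises; the upper nibbles of sqes/cqes (equal to the lower ones here) encode the maximum entry sizes.]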
00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.614 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:29.615 13:50:16 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:29.615 13:50:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:16:29.615 13:50:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:16:29.874 
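
[The trace above is the Simple Copy capability scan: for each discovered controller, ctrl_has_scc looks up the ONCS (Optional NVM Command Support) value captured during identify and tests bit 8, which advertises the Simple Copy command. Every controller here reports oncs=0x15d, which has bit 8 (0x100) set, so all four pass. A minimal standalone sketch of the same check, assuming nvme-cli is installed and /dev/nvme0 is an example device:

    #!/usr/bin/env bash
    # Test ONCS bit 8 (Simple Copy support), mirroring ctrl_has_scc above.
    dev=${1:-/dev/nvme0}
    oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( oncs & 1 << 8 )); then
        echo "$dev supports Simple Copy (oncs=$oncs)"
    else
        echo "$dev lacks Simple Copy (oncs=$oncs)"
    fi

The harness dispatches these probes generically: get_ctrls_with_feature builds the function name ctrl_has_$feature and invokes it per controller, which is why the trace checks `type -t ctrl_has_scc` before the loop.]
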
13:50:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:16:29.874 13:50:16 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:16:29.874 13:50:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:16:29.874 13:50:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:16:29.874 13:50:16 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:30.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:31.006 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.006 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.006 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:31.006 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:16:31.006 13:50:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:16:31.006 13:50:17 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:31.006 13:50:17 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:31.006 13:50:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:31.006 ************************************ 00:16:31.006 START TEST nvme_simple_copy 00:16:31.006 ************************************ 00:16:31.006 13:50:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:16:31.356 Initializing NVMe Controllers 00:16:31.356 Attaching to 0000:00:10.0 00:16:31.356 Controller supports SCC. Attached to 0000:00:10.0 00:16:31.356 Namespace ID: 1 size: 6GB 00:16:31.356 Initialization complete. 00:16:31.356 00:16:31.356 Controller QEMU NVMe Ctrl (12340 ) 00:16:31.356 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:16:31.356 Namespace Block Size:4096 00:16:31.356 Writing LBAs 0 to 63 with Random Data 00:16:31.356 Copied LBAs from 0 - 63 to the Destination LBA 256 00:16:31.356 LBAs matching Written Data: 64 00:16:31.356 00:16:31.356 real 0m0.334s 00:16:31.356 user 0m0.128s 00:16:31.356 sys 0m0.104s 00:16:31.356 13:50:18 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:31.356 13:50:18 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:16:31.356 ************************************ 00:16:31.356 END TEST nvme_simple_copy 00:16:31.356 ************************************ 00:16:31.629 00:16:31.630 real 0m8.542s 00:16:31.630 user 0m1.547s 00:16:31.630 sys 0m2.007s 00:16:31.630 13:50:18 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:31.630 13:50:18 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:31.630 ************************************ 00:16:31.630 END TEST nvme_scc 00:16:31.630 ************************************ 00:16:31.630 13:50:18 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:16:31.630 13:50:18 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:16:31.630 13:50:18 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:16:31.630 13:50:18 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:16:31.630 13:50:18 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:16:31.630 13:50:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:31.630 13:50:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:31.630 13:50:18 -- common/autotest_common.sh@10 -- # set +x 00:16:31.630 ************************************ 00:16:31.630 START TEST nvme_fdp 00:16:31.630 ************************************ 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:16:31.630 * Looking for test storage... 
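
[The simple_copy test above writes LBAs 0-63 with random data, issues a single Simple Copy to destination LBA 256, and verifies that all 64 copied LBAs match the written data. Roughly the same flow can be reproduced from a shell with nvme-cli once the namespace is bound to the kernel nvme driver; the --sdlba/--slbs/--blocks flag names below are recalled from nvme-cli's copy command and should be confirmed with `nvme copy --help`, and /dev/nvme0n1 is an example namespace:

    #!/usr/bin/env bash
    # Hedged sketch of write -> Simple Copy -> verify; not the SPDK test binary itself.
    set -e
    ns=/dev/nvme0n1   # example namespace, 4096-byte blocks as reported above
    bs=4096
    dd if=/dev/urandom of="$ns" bs=$bs count=64 oflag=direct       # fill LBAs 0-63
    nvme copy "$ns" --sdlba=256 --slbs=0 --blocks=63               # one range; nlb is zero-based
    dd if="$ns" of=/tmp/src.bin bs=$bs count=64 iflag=direct       # read back source range
    dd if="$ns" of=/tmp/dst.bin bs=$bs count=64 skip=256 iflag=direct  # read back destination
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"
]
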
00:16:31.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.630 --rc genhtml_branch_coverage=1 00:16:31.630 --rc genhtml_function_coverage=1 00:16:31.630 --rc genhtml_legend=1 00:16:31.630 --rc geninfo_all_blocks=1 00:16:31.630 --rc geninfo_unexecuted_blocks=1 00:16:31.630 00:16:31.630 ' 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.630 --rc genhtml_branch_coverage=1 00:16:31.630 --rc genhtml_function_coverage=1 00:16:31.630 --rc genhtml_legend=1 00:16:31.630 --rc geninfo_all_blocks=1 00:16:31.630 --rc geninfo_unexecuted_blocks=1 00:16:31.630 00:16:31.630 ' 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:16:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.630 --rc genhtml_branch_coverage=1 00:16:31.630 --rc genhtml_function_coverage=1 00:16:31.630 --rc genhtml_legend=1 00:16:31.630 --rc geninfo_all_blocks=1 00:16:31.630 --rc geninfo_unexecuted_blocks=1 00:16:31.630 00:16:31.630 ' 00:16:31.630 13:50:18 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:31.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.630 --rc genhtml_branch_coverage=1 00:16:31.630 --rc genhtml_function_coverage=1 00:16:31.630 --rc genhtml_legend=1 00:16:31.630 --rc geninfo_all_blocks=1 00:16:31.630 --rc geninfo_unexecuted_blocks=1 00:16:31.630 00:16:31.630 ' 00:16:31.630 13:50:18 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.630 13:50:18 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.630 13:50:18 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.630 13:50:18 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.630 13:50:18 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.630 13:50:18 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:16:31.630 13:50:18 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
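
[The lcov probe traced above ends with `lt 1.15 2`: cmp_versions in scripts/common.sh splits both version strings on ".", "-", and ":" and compares them component by component, so lcov 1.15 sorts before 2 and the harness exports the older-style LCOV_OPTS seen here. A compact sketch of the same comparison, assuming purely numeric dotted versions:

    #!/usr/bin/env bash
    # version_lt A B: succeed when version A sorts before version B,
    # splitting on .-: and comparing numerically, as cmp_versions does above.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
]
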
00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:31.630 13:50:18 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:16:31.630 13:50:18 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.630 13:50:18 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:32.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:32.197 Waiting for block devices as requested 00:16:32.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.455 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.455 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.713 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:37.984 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:37.984 13:50:24 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:37.984 13:50:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:37.984 13:50:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:37.984 13:50:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:37.984 13:50:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.984 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:37.985 13:50:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:37.985 13:50:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.985 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:37.986 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:37.986 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:37.986 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:37.987 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:37.987 
13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.987 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:37.988 13:50:24 nvme_fdp -- 
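The prologue traced just above (local ref=nvme0n1 reg val, shift, local -gA 'nvme0n1=()', IFS=:, read -r reg val, then one eval per field) is the generic nvme_get helper from nvme/functions.sh: it runs nvme id-ctrl or id-ns, splits each output line on the first ':' into a register name and a value, and stores every non-empty pair in a bash associative array named after the device. A minimal standalone sketch of that pattern (an illustrative re-creation, not the literal functions.sh source):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declare -gA nvme0n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # strip padding around the field name
            val=${val# }                 # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"   # e.g. nvme0n1[nsze]=0x140000
        done < <("$@")
    }

    nvme_get nvme0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
    echo "${nvme0n1[nsze]}"              # -> 0x140000

Because IFS=: only splits off the first field, colons inside the value survive intact, which is why entries such as subnqn (nqn.2019-08.org.qemu:12341) come through whole.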
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:37.988 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:37.989 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:37.989 13:50:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:37.989 13:50:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:37.989 13:50:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:37.989 13:50:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # 
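After the lbaf table, the trace registers the controller: each namespace node lands in the per-controller map (_ctrl_ns[1]=nvme0n1), and the controller itself in ctrls, nvmes (which records the name of that per-controller namespace array), bdfs (its PCI address, 0000:00:11.0), and ordered_ctrls, before the loop moves on to /sys/class/nvme/nvme1. A sketch of that sysfs scan under the same naming (illustrative; the real script resolves the BDF via pci_can_use and uses a nameref rather than eval):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")  # BDF, e.g. 0000:00:11.0
        declare -gA "${ctrl_dev}_ns=()"
        for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme0n1, nvme0n2, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            eval "${ctrl_dev}_ns[\${ns_dev##*n}]=\$ns_dev"   # nvme0_ns[1]=nvme0n1
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done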
IFS=: 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:37.989 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 
13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:37.990 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 
13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:37.991 13:50:24 nvme_fdp -- 
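Several id-ctrl fields captured here are bitmasks; oncs=0x15d, for instance, advertises which optional NVM commands the controller supports. A quick decode following the NVMe base specification's bit assignments (labels abbreviated and assumed against your spec revision):

    oncs=0x15d
    names=(Compare Write-Uncorrectable Dataset-Management Write-Zeroes
           Save/Select-in-Features Reservations Timestamp Verify Copy)
    for bit in "${!names[@]}"; do
        (( oncs & (1 << bit) )) && printf 'ONCS bit %d: %s\n' "$bit" "${names[bit]}"
    done

For 0x15d this prints bits 0, 2, 3, 4, 6 and 8: Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy.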
nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.991 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.991 13:50:24 
nvme_get nvme1 id-ctrl (remaining registers):
  nvme1: mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
  nvme1: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
nvme_get nvme1n1 id-ns (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1, reached via the /sys/class/nvme/nvme1/nvme1n1 namespace walk):
  nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
  nvme1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
  nvme1n1: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  nvme1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
Bookkeeping (nvme/functions.sh@58-63): _ctrl_ns[1]=nvme1n1; ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:10.0; ordered_ctrls[1]=nvme1
Next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use 0000:00:12.0 returns 0 (no PCI allow/block filter is set), ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2 begins.
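Before the second controller's dump, note how the bookkeeping arrays populated above are meant to be consumed: ctrls, nvmes, and bdfs are associative arrays keyed by controller name, nvmes maps each controller to the name of its per-namespace array, and ordered_ctrls is a sparse indexed array keyed by controller number. A minimal consumer sketch, assuming a shell that has sourced nvme/functions.sh and completed this scan (print_ctrls is an illustrative name, not part of the suite):

    print_ctrls() {
        local ctrl
        for ctrl in "${ordered_ctrls[@]}"; do  # "${arr[@]}" skips unset slots of a sparse array
            local -n _ns=${nvmes[$ctrl]}       # nameref to e.g. nvme1_ns (ns index -> ns device)
            printf '%s @ %s: %d namespace(s)\n' "$ctrl" "${bdfs[$ctrl]}" "${#_ns[@]}"
        done
    }
    print_ctrls    # e.g. "nvme1 @ 0000:00:10.0: 1 namespace(s)"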
nvme_get nvme2 id-ctrl (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2):
  nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
  nvme2: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
  nvme2: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
  nvme2: hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
  nvme2: nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
  nvme2: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
  nvme2: mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
  nvme2: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
Namespace walk: /sys/class/nvme/nvme2/nvme2n1 exists, ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1 begins with nsze=0x100000.
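Every register above went through the same five-step xtrace cycle ([[ -n val ]] guard, eval of the assignment, the assignment echo, IFS=:, read -r reg val). A minimal standalone sketch of that parsing pattern, assuming nvme-cli's human-readable "field : value" output (nvme_get_sketch and its whitespace trimming are a simplification, not the exact functions.sh code, which the suite invokes through /usr/local/src/nvme-cli/nvme):

    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                        # global assoc array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # "subnqn    " -> "subnqn"
            val="${val#"${val%%[![:space:]]*}"}"   # strip padding after the colon
            [[ -n $reg && -n $val ]] || continue   # skip blank and heading lines
            eval "${ref}[\$reg]=\$val"             # e.g. nvme2[subnqn]=nqn.2019-08.org.qemu:12342
        done < <(nvme id-ctrl "$dev")
    }
    # Usage: nvme_get_sketch nvme2 /dev/nvme2; echo "${nvme2[subnqn]}"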
nvme_get nvme2n1 id-ns (continued):
  nvme2n1: ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
  nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
  nvme2n1: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0'
' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:37.996 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:37.997 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:37.997 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:37.998 
13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.998 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:37.999 13:50:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:37.999 13:50:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:37.999 13:50:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:37.999 13:50:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:37.999 13:50:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:38.260 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 
13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:38.260 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 
13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:38.261 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
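[Editor's note] The long trace above is one parsing idiom applied once per identify-controller field: nvme/functions.sh splits each "reg : value" line on IFS=:, skips empty values, and evals the pair into an associative array keyed by register name (nvme3[oacs]=0x12a, nvme3[subnqn]=..., and so on). A minimal sketch of that loop, assuming the input is the "field : value" text printed by nvme id-ctrl; the exact text source and trimming details in the repo may differ:

  declare -A nvme3
  # assumption: parsing `nvme id-ctrl` output; functions.sh may feed this differently
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}            # drop padding around the field name
      [[ -n $reg && -n $val ]] || continue
      val=${val# }                        # trim the leading space after ':'
      eval "nvme3[$reg]=\"$val\""         # e.g. nvme3[oacs]=0x12a
  done < <(nvme id-ctrl /dev/nvme3)
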
00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.262 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.263 13:50:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
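[Editor's note] The loop running through here is how the harness picks a controller for the FDP tests: for each controller it reads CTRATT out of the parsed identify data via the get_ctratt helper seen in the trace, then tests bit 19, the Flexible Data Placement attribute. Only nvme3 (CTRATT 0x88010) has the bit set, so it is the controller echoed below; nvme0-2 report 0x8000 and are skipped. A condensed sketch of the check:

  ctrl_has_fdp() {
      local ctrl=$1 ctratt
      ctratt=$(get_ctratt "$ctrl")   # 0x8000 for nvme0-2, 0x88010 for nvme3
      (( ctratt & 1 << 19 ))         # CTRATT bit 19 = FDP supported
  }
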
00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:16:38.263 13:50:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:16:38.263 13:50:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:16:38.263 13:50:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:38.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.398 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.398 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.398 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.398 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.398 13:50:26 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:39.398 13:50:26 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:39.398 13:50:26 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.398 13:50:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:39.398 ************************************ 00:16:39.398 START TEST nvme_flexible_data_placement 00:16:39.398 ************************************ 00:16:39.398 13:50:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:39.963 Initializing NVMe Controllers 00:16:39.963 Attaching to 0000:00:13.0 00:16:39.963 Controller supports FDP Attached to 0000:00:13.0 00:16:39.963 Namespace ID: 1 Endurance Group ID: 1 00:16:39.964 Initialization complete. 00:16:39.964 00:16:39.964 ================================== 00:16:39.964 == FDP tests for Namespace: #01 == 00:16:39.964 ================================== 00:16:39.964 00:16:39.964 Get Feature: FDP: 00:16:39.964 ================= 00:16:39.964 Enabled: Yes 00:16:39.964 FDP configuration Index: 0 00:16:39.964 00:16:39.964 FDP configurations log page 00:16:39.964 =========================== 00:16:39.964 Number of FDP configurations: 1 00:16:39.964 Version: 0 00:16:39.964 Size: 112 00:16:39.964 FDP Configuration Descriptor: 0 00:16:39.964 Descriptor Size: 96 00:16:39.964 Reclaim Group Identifier format: 2 00:16:39.964 FDP Volatile Write Cache: Not Present 00:16:39.964 FDP Configuration: Valid 00:16:39.964 Vendor Specific Size: 0 00:16:39.964 Number of Reclaim Groups: 2 00:16:39.964 Number of Reclaim Unit Handles: 8 00:16:39.964 Max Placement Identifiers: 128 00:16:39.964 Number of Namespaces Supported: 256 00:16:39.964 Reclaim unit Nominal Size: 6000000 bytes 00:16:39.964 Estimated Reclaim Unit Time Limit: Not Reported 00:16:39.964 RUH Desc #000: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #001: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #002: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #003: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #004: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #005: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #006: RUH Type: Initially Isolated 00:16:39.964 RUH Desc #007: RUH Type: Initially Isolated 00:16:39.964 00:16:39.964 FDP reclaim unit handle usage log page 00:16:39.964 ====================================== 00:16:39.964 Number of Reclaim Unit Handles: 8 00:16:39.964 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:39.964 RUH Usage Desc #001: RUH Attributes: Unused 00:16:39.964 RUH Usage Desc #002: RUH Attributes: Unused 00:16:39.964 RUH Usage Desc #003: RUH Attributes: Unused 00:16:39.964 RUH Usage Desc #004: RUH Attributes: Unused 00:16:39.964 RUH Usage Desc #005: RUH Attributes: Unused 00:16:39.964 RUH Usage Desc #006: RUH Attributes: Unused 00:16:39.964 RUH Usage Desc #007: RUH Attributes: Unused 00:16:39.964 00:16:39.964 FDP statistics log page 00:16:39.964 ======================= 00:16:39.964 Host bytes with metadata written: 744017920 00:16:39.964 Media bytes with metadata written: 744157184 00:16:39.964 Media bytes erased: 0 00:16:39.964 00:16:39.964 FDP Reclaim unit handle status 00:16:39.964 ============================== 00:16:39.964 Number of RUHS descriptors: 2 00:16:39.964 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003a73 00:16:39.964 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:16:39.964 00:16:39.964 FDP write on placement id: 0 success 00:16:39.964 00:16:39.964 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:16:39.964 00:16:39.964 IO mgmt send: RUH update for Placement ID: #0 Success 00:16:39.964 00:16:39.964 Get Feature: FDP Events for Placement handle: #0 00:16:39.964 ======================== 00:16:39.964 Number of FDP Events: 6 00:16:39.964 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:16:39.964 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:16:39.964 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:16:39.964 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:16:39.964 FDP Event: #4 Type: Media Reallocated Enabled: No 00:16:39.964 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:16:39.964 00:16:39.964 FDP events log page 00:16:39.964 =================== 00:16:39.964 Number of FDP events: 1 00:16:39.964 FDP Event #0: 00:16:39.964 Event Type: RU Not Written to Capacity 00:16:39.964 Placement Identifier: Valid 00:16:39.964 NSID: Valid 00:16:39.964 Location: Valid 00:16:39.964 Placement Identifier: 0 00:16:39.964 Event Timestamp: 9 00:16:39.964 Namespace Identifier: 1 00:16:39.964 Reclaim Group Identifier: 0 00:16:39.964 Reclaim Unit Handle Identifier: 0 00:16:39.964 00:16:39.964 FDP test passed 00:16:39.964 00:16:39.964 real 0m0.334s 00:16:39.964 user 0m0.111s 00:16:39.964 sys 0m0.120s 00:16:39.964 13:50:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.964 ************************************ 00:16:39.964 END TEST nvme_flexible_data_placement 00:16:39.964 ************************************ 00:16:39.964 13:50:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 00:16:39.964 real 0m8.358s 00:16:39.964 user 0m1.394s 00:16:39.964 sys 0m1.959s 00:16:39.964 13:50:26 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.964 13:50:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 ************************************ 00:16:39.964 END TEST nvme_fdp 00:16:39.964 ************************************ 00:16:39.964 13:50:26 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:16:39.964 13:50:26 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:39.964 13:50:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:39.964 13:50:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.964 13:50:26 -- common/autotest_common.sh@10 -- # set +x 00:16:39.964 ************************************ 00:16:39.964 START TEST nvme_rpc 00:16:39.964 ************************************ 00:16:39.964 13:50:26 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:39.964 * Looking for test storage... 
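[Editor's note] Every test above and below goes through the same run_test wrapper from autotest_common.sh: it prints the starred START TEST / END TEST banners, runs the body, and the real/user/sys lines come from timing it. A rough sketch of the pattern, simplified from what the trace shows; the repo helper does more bookkeeping (xtrace toggling, argument checks):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # body; emits the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
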
00:16:39.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:39.964 13:50:26 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:39.964 13:50:26 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:39.964 13:50:26 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:40.223 13:50:26 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:16:40.223 13:50:26 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.224 13:50:26 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.224 --rc genhtml_branch_coverage=1 00:16:40.224 --rc genhtml_function_coverage=1 00:16:40.224 --rc genhtml_legend=1 00:16:40.224 --rc geninfo_all_blocks=1 00:16:40.224 --rc geninfo_unexecuted_blocks=1 00:16:40.224 00:16:40.224 ' 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:16:40.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67948 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:40.224 13:50:26 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67948 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67948 ']' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.224 13:50:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.224 [2024-11-04 13:50:27.105029] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
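[Editor's note] Boiled down, the setup traced here is: start spdk_tgt pinned to two cores, wait for its JSON-RPC socket, then drive it with rpc.py. A condensed equivalent of those steps, using the paths and arguments from this run (waitforlisten is the repo helper that polls until /var/tmp/spdk.sock accepts connections):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &   # reactors on cores 0 and 1
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                              # blocks until /var/tmp/spdk.sock is up
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
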
00:16:40.224 [2024-11-04 13:50:27.105188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67948 ] 00:16:40.482 [2024-11-04 13:50:27.285170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:40.740 [2024-11-04 13:50:27.477650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.740 [2024-11-04 13:50:27.477662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.674 13:50:28 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.674 13:50:28 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:16:41.674 13:50:28 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:41.932 Nvme0n1 00:16:41.932 13:50:28 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:41.932 13:50:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:42.499 request: 00:16:42.499 { 00:16:42.499 "bdev_name": "Nvme0n1", 00:16:42.499 "filename": "non_existing_file", 00:16:42.499 "method": "bdev_nvme_apply_firmware", 00:16:42.499 "req_id": 1 00:16:42.499 } 00:16:42.499 Got JSON-RPC error response 00:16:42.499 response: 00:16:42.499 { 00:16:42.499 "code": -32603, 00:16:42.499 "message": "open file failed." 00:16:42.499 } 00:16:42.499 13:50:29 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:42.499 13:50:29 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:42.499 13:50:29 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:42.758 13:50:29 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:42.758 13:50:29 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67948 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67948 ']' 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67948 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67948 00:16:42.758 killing process with pid 67948 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67948' 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67948 00:16:42.758 13:50:29 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67948 00:16:46.044 ************************************ 00:16:46.044 END TEST nvme_rpc 00:16:46.044 ************************************ 00:16:46.044 00:16:46.044 real 0m5.552s 00:16:46.044 user 0m10.710s 00:16:46.044 sys 0m0.765s 00:16:46.044 13:50:32 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:46.044 13:50:32 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.044 13:50:32 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:46.044 13:50:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:16:46.044 13:50:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:46.044 13:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:46.044 ************************************ 00:16:46.044 START TEST nvme_rpc_timeouts 00:16:46.044 ************************************ 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:46.044 * Looking for test storage... 00:16:46.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.044 13:50:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.044 --rc genhtml_branch_coverage=1 00:16:46.044 --rc genhtml_function_coverage=1 00:16:46.044 --rc genhtml_legend=1 00:16:46.044 --rc geninfo_all_blocks=1 00:16:46.044 --rc geninfo_unexecuted_blocks=1 00:16:46.044 00:16:46.044 ' 00:16:46.044 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.044 --rc genhtml_branch_coverage=1 00:16:46.044 --rc genhtml_function_coverage=1 00:16:46.044 --rc genhtml_legend=1 00:16:46.045 --rc geninfo_all_blocks=1 00:16:46.045 --rc geninfo_unexecuted_blocks=1 00:16:46.045 00:16:46.045 ' 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:46.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.045 --rc genhtml_branch_coverage=1 00:16:46.045 --rc genhtml_function_coverage=1 00:16:46.045 --rc genhtml_legend=1 00:16:46.045 --rc geninfo_all_blocks=1 00:16:46.045 --rc geninfo_unexecuted_blocks=1 00:16:46.045 00:16:46.045 ' 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:46.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.045 --rc genhtml_branch_coverage=1 00:16:46.045 --rc genhtml_function_coverage=1 00:16:46.045 --rc genhtml_legend=1 00:16:46.045 --rc geninfo_all_blocks=1 00:16:46.045 --rc geninfo_unexecuted_blocks=1 00:16:46.045 00:16:46.045 ' 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68035 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68035 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68067 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:16:46.045 13:50:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68067 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 68067 ']' 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.045 13:50:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:46.045 [2024-11-04 13:50:32.698588] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:16:46.045 [2024-11-04 13:50:32.699006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68067 ] 00:16:46.045 [2024-11-04 13:50:32.892543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.303 [2024-11-04 13:50:33.075657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.303 [2024-11-04 13:50:33.075659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.676 13:50:34 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.676 Checking default timeout settings: 00:16:47.676 13:50:34 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:16:47.676 13:50:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:16:47.676 13:50:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:47.935 Making settings changes with rpc: 00:16:47.935 13:50:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:16:47.935 13:50:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:16:48.193 Check default vs. modified settings: 00:16:48.193 13:50:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:16:48.193 13:50:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:48.451 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:48.452 Setting action_on_timeout is changed as expected. 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:48.452 Setting timeout_us is changed as expected. 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
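[Editor's note] The comparison being traced here extracts one setting at a time from the saved config dumps and normalizes it before comparing: grep pulls the matching line, awk takes the value column, sed strips everything but alphanumerics. Condensed from the trace, with the file names from this run; get_setting is a hypothetical name for the pipeline, which the script inlines:

  get_setting() {
      grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
  }
  before=$(get_setting timeout_us /tmp/settings_default_68035)     # -> 0
  after=$(get_setting timeout_us /tmp/settings_modified_68035)     # -> 12000000
  [[ $before == "$after" ]] || echo "Setting timeout_us is changed as expected."
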
00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:48.452 Setting timeout_admin_us is changed as expected. 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68035 /tmp/settings_modified_68035 00:16:48.452 13:50:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68067 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 68067 ']' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 68067 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68067 00:16:48.452 killing process with pid 68067 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68067' 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 68067 00:16:48.452 13:50:35 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 68067 00:16:51.818 RPC TIMEOUT SETTING TEST PASSED. 00:16:51.818 13:50:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
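[Editor's note] killprocess, used just above to tear down the target, is defensive: it confirms the pid still exists with kill -0, inspects the process name (refusing to kill anything running as sudo), then signals and waits. A trimmed sketch matching the calls visible in the trace; the real helper also branches on uname for non-Linux hosts:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                      # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0
      [[ $name == sudo ]] && return 1                 # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
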
00:16:51.818 00:16:51.818 real 0m5.833s 00:16:51.818 user 0m11.138s 00:16:51.818 sys 0m0.795s 00:16:51.818 ************************************ 00:16:51.818 END TEST nvme_rpc_timeouts 00:16:51.818 ************************************ 00:16:51.818 13:50:38 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:51.818 13:50:38 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:51.818 13:50:38 -- spdk/autotest.sh@239 -- # uname -s 00:16:51.818 13:50:38 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:16:51.818 13:50:38 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:51.818 13:50:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:51.818 13:50:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:51.818 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:16:51.818 ************************************ 00:16:51.818 START TEST sw_hotplug 00:16:51.818 ************************************ 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:51.818 * Looking for test storage... 00:16:51.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.818 13:50:38 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.818 --rc genhtml_branch_coverage=1 00:16:51.818 --rc genhtml_function_coverage=1 00:16:51.818 --rc genhtml_legend=1 00:16:51.818 --rc geninfo_all_blocks=1 00:16:51.818 --rc geninfo_unexecuted_blocks=1 00:16:51.818 00:16:51.818 ' 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.818 --rc genhtml_branch_coverage=1 00:16:51.818 --rc genhtml_function_coverage=1 00:16:51.818 --rc genhtml_legend=1 00:16:51.818 --rc geninfo_all_blocks=1 00:16:51.818 --rc geninfo_unexecuted_blocks=1 00:16:51.818 00:16:51.818 ' 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.818 --rc genhtml_branch_coverage=1 00:16:51.818 --rc genhtml_function_coverage=1 00:16:51.818 --rc genhtml_legend=1 00:16:51.818 --rc geninfo_all_blocks=1 00:16:51.818 --rc geninfo_unexecuted_blocks=1 00:16:51.818 00:16:51.818 ' 00:16:51.818 13:50:38 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.818 --rc genhtml_branch_coverage=1 00:16:51.818 --rc genhtml_function_coverage=1 00:16:51.818 --rc genhtml_legend=1 00:16:51.818 --rc geninfo_all_blocks=1 00:16:51.818 --rc geninfo_unexecuted_blocks=1 00:16:51.818 00:16:51.818 ' 00:16:51.818 13:50:38 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.334 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.334 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.334 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.334 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
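
The lt 1.15 2 trace above (scripts/common.sh) is the framework's version gate for lcov: it splits both versions on ".", "-" and ":" into arrays and compares them field by field, padding the shorter one with zeros. Condensed into a single function under that reading (a sketch; the real script factors this into cmp_versions and decimal helpers):

    # Returns success when version $1 < version $2, field by field.
    lt() {
        local -a ver1 ver2
        local v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Missing or non-numeric fields count as 0 (the traced 'decimal' step).
            d1=${ver1[v]:-0}; [[ $d1 =~ ^[0-9]+$ ]] || d1=0
            d2=${ver2[v]:-0}; [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            ((d1 > d2)) && return 1
            ((d1 < d2)) && return 0
        done
        return 1    # equal versions are not "less than"
    }

For the traced call, ver1=(1 15) and ver2=(2); the first field already decides 1 < 2, which is why the trace returns 0 and the LCOV_OPTS branch runs.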
00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@233 -- # local class 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:52.334 13:50:39 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:16:52.334 13:50:39 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:16:52.334 13:50:39 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:52.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.900 Waiting for block devices as requested 00:16:52.900 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.158 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.158 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.415 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.714 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:58.714 13:50:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:16:58.714 13:50:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:58.972 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:16:58.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.972 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:16:59.229 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:16:59.883 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:59.883 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:59.883 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:16:59.884 13:50:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68957 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:16:59.884 13:50:46 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:16:59.884 13:50:46 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:16:59.884 13:50:46 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:16:59.884 13:50:46 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:16:59.884 13:50:46 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:59.884 13:50:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:00.141 Initializing NVMe Controllers 00:17:00.141 Attaching to 0000:00:10.0 00:17:00.141 Attaching to 0000:00:11.0 00:17:00.141 Attached to 0000:00:10.0 00:17:00.141 Attached to 0000:00:11.0 00:17:00.141 Initialization complete. Starting I/O... 
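
The debug_remove_attach_helper trace above wraps the whole remove/attach cycle in a timer: the traced locals (cmd_es, time=0, TIMEFORMAT=%2R) and the exec suggest it captures bash's `time` report for the helper and echoes only the elapsed seconds, which resurface later in the log as helper_time=43.21. A plausible shape for that wrapper (reconstructed; the fd plumbing is an assumption, the variable names come from the trace):

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        exec 3>&1 4>&2                        # keep the caller's stdout/stderr reachable
        # `time` reports on the group's stderr; capture that while the command's
        # own output still flows to the saved descriptors.
        time=$( { time "$@" 1>&3 2>&4; } 2>&1 ) || cmd_es=$?
        exec 3>&- 4>&-
        echo "$time"                          # elapsed seconds, e.g. "43.21"
        return "$cmd_es"
    }

Used as in the trace: helper_time=$(timing_cmd remove_attach_helper 3 6 false).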
00:17:00.141 QEMU NVMe Ctrl (12340 ): 3 I/Os completed (+3) 00:17:00.141 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:17:00.141 00:17:01.075 QEMU NVMe Ctrl (12340 ): 958 I/Os completed (+955) 00:17:01.075 QEMU NVMe Ctrl (12341 ): 980 I/Os completed (+978) 00:17:01.075 00:17:02.448 QEMU NVMe Ctrl (12340 ): 2070 I/Os completed (+1112) 00:17:02.448 QEMU NVMe Ctrl (12341 ): 2101 I/Os completed (+1121) 00:17:02.448 00:17:03.382 QEMU NVMe Ctrl (12340 ): 3375 I/Os completed (+1305) 00:17:03.382 QEMU NVMe Ctrl (12341 ): 3457 I/Os completed (+1356) 00:17:03.382 00:17:04.315 QEMU NVMe Ctrl (12340 ): 4751 I/Os completed (+1376) 00:17:04.315 QEMU NVMe Ctrl (12341 ): 5031 I/Os completed (+1574) 00:17:04.315 00:17:05.248 QEMU NVMe Ctrl (12340 ): 5971 I/Os completed (+1220) 00:17:05.248 QEMU NVMe Ctrl (12341 ): 6372 I/Os completed (+1341) 00:17:05.248 00:17:05.813 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:05.813 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:05.813 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:05.813 [2024-11-04 13:50:52.675323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:05.813 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:05.813 [2024-11-04 13:50:52.677559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.677641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.677667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.677694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:05.813 [2024-11-04 13:50:52.681778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.681852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.681882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.681914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:05.813 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:05.813 [2024-11-04 13:50:52.725321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
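
The echo 1 at sw_hotplug.sh@40 above is the actual hot-unplug: writing 1 to a device's sysfs remove node surprise-removes the PCIe function while the hotplug example app is mid-I/O, which is what produces the nvme_ctrlr_fail "in failed state" and aborted-command errors. The echoes that follow below (@56 and @58-@62) bring the devices back and steer them to the userspace driver. The exact sysfs nodes are inferred, not shown in the trace (only /sys/bus/pci/rescan appears verbatim later in this log), so treat this as a hedged sketch:

    remove_device() {
        local bdf=$1                                   # e.g. 0000:00:10.0
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the function
    }
    rescan_devices() {
        echo 1 > /sys/bus/pci/rescan                   # rediscover removed functions
    }
    rebind_userspace() {
        local bdf=$1
        # The uio_pci_generic / BDF / '' echoes at @59-@62 suggest driver_override
        # is used to pin the rescanned device back to uio_pci_generic (assumption).
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    }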
00:17:05.813 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:05.813 [2024-11-04 13:50:52.727899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.727977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.728022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.728052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:05.813 [2024-11-04 13:50:52.732192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.732258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.732291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:05.813 [2024-11-04 13:50:52.732318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:06.072 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:06.072 13:50:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:06.329 Attaching to 0000:00:10.0 00:17:06.329 Attached to 0000:00:10.0 00:17:06.329 13:50:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:06.329 13:50:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:06.329 13:50:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:06.329 Attaching to 0000:00:11.0 00:17:06.329 Attached to 0000:00:11.0 00:17:07.262 QEMU NVMe Ctrl (12340 ): 1179 I/Os completed (+1179) 00:17:07.262 QEMU NVMe Ctrl (12341 ): 1131 I/Os completed (+1131) 00:17:07.262 00:17:08.212 QEMU NVMe Ctrl (12340 ): 2427 I/Os completed (+1248) 00:17:08.212 QEMU NVMe Ctrl (12341 ): 2541 I/Os completed (+1410) 00:17:08.212 00:17:09.146 QEMU NVMe Ctrl (12340 ): 3831 I/Os completed (+1404) 00:17:09.147 QEMU NVMe Ctrl (12341 ): 3960 I/Os completed (+1419) 00:17:09.147 00:17:10.080 QEMU NVMe Ctrl (12340 ): 5355 I/Os completed (+1524) 00:17:10.080 QEMU NVMe Ctrl (12341 ): 5501 I/Os completed (+1541) 00:17:10.080 00:17:11.459 QEMU NVMe Ctrl (12340 ): 6751 I/Os completed (+1396) 00:17:11.459 QEMU NVMe Ctrl (12341 ): 6919 I/Os completed (+1418) 00:17:11.459 00:17:12.391 QEMU NVMe Ctrl (12340 ): 8123 I/Os completed (+1372) 00:17:12.391 QEMU NVMe Ctrl (12341 ): 8471 I/Os completed (+1552) 00:17:12.391 00:17:13.326 QEMU NVMe Ctrl (12340 ): 9411 I/Os completed (+1288) 00:17:13.327 QEMU NVMe Ctrl (12341 ): 9829 I/Os completed (+1358) 00:17:13.327 00:17:14.259 QEMU NVMe Ctrl (12340 ): 10699 I/Os completed (+1288) 00:17:14.259 QEMU NVMe Ctrl (12341 ): 11168 I/Os completed (+1339) 00:17:14.259 
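
The rising I/O counters above come from the hotplug app running I/O continuously while remove_attach_helper repeats the cycle: hotplug_events=3 passes over nvmes[] (the two allowed controllers), with hotplug_wait=6 pauses (the traced sleep 6 before the first event and sleep 12 between events). Using the remove/rescan/rebind sketches above, the loop shape is roughly (a reconstruction, not the verbatim script):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev
        sleep "$hotplug_wait"                    # @36: let the I/O workload ramp up
        while ((hotplug_events--)); do           # @38: three events in this run
            for dev in "${nvmes[@]}"; do
                remove_device "$dev"             # @39-@40: surprise-remove each ctrlr
            done
            rescan_devices                       # @56: bring them back
            for dev in "${nvmes[@]}"; do
                rebind_userspace "$dev"          # @58-@62: back onto uio_pci_generic
            done
            sleep $((hotplug_wait * 2))          # @66: sleep 12 between events
        done
    }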
00:17:15.193 QEMU NVMe Ctrl (12340 ): 11991 I/Os completed (+1292) 00:17:15.193 QEMU NVMe Ctrl (12341 ): 12516 I/Os completed (+1348) 00:17:15.193 00:17:16.127 QEMU NVMe Ctrl (12340 ): 13400 I/Os completed (+1409) 00:17:16.127 QEMU NVMe Ctrl (12341 ): 14008 I/Os completed (+1492) 00:17:16.127 00:17:17.060 QEMU NVMe Ctrl (12340 ): 14893 I/Os completed (+1493) 00:17:17.060 QEMU NVMe Ctrl (12341 ): 15533 I/Os completed (+1525) 00:17:17.060 00:17:18.431 QEMU NVMe Ctrl (12340 ): 16312 I/Os completed (+1419) 00:17:18.431 QEMU NVMe Ctrl (12341 ): 16986 I/Os completed (+1453) 00:17:18.431 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:18.431 [2024-11-04 13:51:05.096835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:18.431 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:18.431 [2024-11-04 13:51:05.099586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.099682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.099717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.099751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:18.431 [2024-11-04 13:51:05.104493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.104602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.104647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.104676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:18.431 [2024-11-04 13:51:05.130433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:18.431 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:18.431 [2024-11-04 13:51:05.133011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.133084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.133125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.133155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:18.431 [2024-11-04 13:51:05.138607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.138703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.138743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 [2024-11-04 13:51:05.138779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:18.431 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:18.431 EAL: Scan for (pci) bus failed. 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:18.431 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:18.688 Attaching to 0000:00:10.0 00:17:18.688 Attached to 0000:00:10.0 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:18.688 13:51:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:18.688 Attaching to 0000:00:11.0 00:17:18.688 Attached to 0000:00:11.0 00:17:19.286 QEMU NVMe Ctrl (12340 ): 893 I/Os completed (+893) 00:17:19.286 QEMU NVMe Ctrl (12341 ): 709 I/Os completed (+709) 00:17:19.286 00:17:20.218 QEMU NVMe Ctrl (12340 ): 1940 I/Os completed (+1047) 00:17:20.218 QEMU NVMe Ctrl (12341 ): 2211 I/Os completed (+1502) 00:17:20.218 00:17:21.150 QEMU NVMe Ctrl (12340 ): 3340 I/Os completed (+1400) 00:17:21.150 QEMU NVMe Ctrl (12341 ): 3706 I/Os completed (+1495) 00:17:21.150 00:17:22.092 QEMU NVMe Ctrl (12340 ): 4572 I/Os completed (+1232) 00:17:22.092 QEMU NVMe Ctrl (12341 ): 5025 I/Os completed (+1319) 00:17:22.092 00:17:23.460 QEMU NVMe Ctrl (12340 ): 6068 I/Os completed (+1496) 00:17:23.460 QEMU NVMe Ctrl (12341 ): 6553 I/Os completed (+1528) 00:17:23.460 00:17:24.395 QEMU NVMe Ctrl (12340 ): 7483 I/Os completed (+1415) 00:17:24.395 QEMU NVMe Ctrl (12341 ): 8090 I/Os completed (+1537) 00:17:24.395 00:17:25.328 QEMU NVMe Ctrl (12340 ): 8793 I/Os completed (+1310) 00:17:25.328 QEMU NVMe Ctrl (12341 ): 9452 I/Os completed (+1362) 00:17:25.328 00:17:26.260 QEMU 
NVMe Ctrl (12340 ): 10173 I/Os completed (+1380) 00:17:26.260 QEMU NVMe Ctrl (12341 ): 10981 I/Os completed (+1529) 00:17:26.260 00:17:27.193 QEMU NVMe Ctrl (12340 ): 11665 I/Os completed (+1492) 00:17:27.193 QEMU NVMe Ctrl (12341 ): 12587 I/Os completed (+1606) 00:17:27.193 00:17:28.125 QEMU NVMe Ctrl (12340 ): 13237 I/Os completed (+1572) 00:17:28.125 QEMU NVMe Ctrl (12341 ): 14179 I/Os completed (+1592) 00:17:28.125 00:17:29.058 QEMU NVMe Ctrl (12340 ): 14903 I/Os completed (+1666) 00:17:29.058 QEMU NVMe Ctrl (12341 ): 15929 I/Os completed (+1750) 00:17:29.058 00:17:30.432 QEMU NVMe Ctrl (12340 ): 16487 I/Os completed (+1584) 00:17:30.432 QEMU NVMe Ctrl (12341 ): 17549 I/Os completed (+1620) 00:17:30.432 00:17:30.690 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:30.691 [2024-11-04 13:51:17.486567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:30.691 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:30.691 [2024-11-04 13:51:17.489844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.489958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.489999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.490036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:30.691 [2024-11-04 13:51:17.494701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.494801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.494839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.494873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:30.691 [2024-11-04 13:51:17.529972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:30.691 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:30.691 [2024-11-04 13:51:17.533072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.533185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.533230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.533267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:30.691 [2024-11-04 13:51:17.537473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.537581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.537628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 [2024-11-04 13:51:17.537659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:30.691 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:30.949 Attaching to 0000:00:10.0 00:17:30.949 Attached to 0000:00:10.0 00:17:30.949 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:31.207 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.207 13:51:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:31.207 Attaching to 0000:00:11.0 00:17:31.207 Attached to 0000:00:11.0 00:17:31.207 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:31.207 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:31.207 [2024-11-04 13:51:17.897274] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:17:43.438 13:51:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:43.438 13:51:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:43.438 13:51:29 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.21 00:17:43.438 13:51:29 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.21 00:17:43.438 13:51:29 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:17:43.438 13:51:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.21 00:17:43.438 13:51:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.21 2 00:17:43.438 remove_attach_helper took 43.21s to complete (handling 2 nvme drive(s)) 13:51:29 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68957 00:17:50.018 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68957) - No such process 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68957 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69495 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.018 13:51:35 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69495 00:17:50.018 13:51:35 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 69495 ']' 00:17:50.018 13:51:35 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.018 13:51:35 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.018 13:51:35 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.018 13:51:35 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.018 13:51:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:50.018 [2024-11-04 13:51:36.041100] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:17:50.018 [2024-11-04 13:51:36.041548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69495 ] 00:17:50.018 [2024-11-04 13:51:36.241674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.018 [2024-11-04 13:51:36.398689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:17:50.953 13:51:37 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:50.953 13:51:37 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:50.953 13:51:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:57.532 13:51:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.532 13:51:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:57.532 [2024-11-04 13:51:43.661371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:57.532 [2024-11-04 13:51:43.664409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:43.664463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:43.664489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 [2024-11-04 13:51:43.664522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:43.664537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:43.664555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 [2024-11-04 13:51:43.664583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:43.664601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:43.664629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 [2024-11-04 13:51:43.664665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:43.664683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:43.664705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 13:51:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.532 13:51:43 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:57.532 13:51:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:57.532 [2024-11-04 13:51:44.061395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:57.532 [2024-11-04 13:51:44.064473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:44.064527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:44.064556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 [2024-11-04 13:51:44.064602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:44.064633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:44.064650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 [2024-11-04 13:51:44.064671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:44.064685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:44.064706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 [2024-11-04 13:51:44.064721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.532 [2024-11-04 13:51:44.064741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.532 [2024-11-04 13:51:44.064755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:57.532 13:51:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.532 13:51:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:57.532 13:51:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.532 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:57.791 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:57.791 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.791 
13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.791 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.791 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:57.791 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:57.792 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.792 13:51:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:10.089 13:51:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.089 13:51:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 13:51:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:10.089 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:10.089 13:51:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.089 13:51:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 [2024-11-04 13:51:56.761696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
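
In this second half (tgt_run_hotplug) the same removal cycle is driven against a running spdk_tgt rather than the example app: the bdev_nvme_set_hotplug -e issued earlier turns on the target's hotplug monitor, so removals like the 0000:00:10.0 failure above make the controller's bdevs disappear from bdev_get_bdevs. Stripped of the test harness, the RPC side of this run uses only commands visible in this log:

    # Against a live spdk_tgt, with rpc.py as invoked earlier in this job:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_set_hotplug -e    # enable hotplug monitoring (sw_hotplug.sh@115)
    $rpc bdev_get_bdevs              # poll bdevs; removed ctrlrs drop out of the list
    $rpc bdev_nvme_set_hotplug -d    # disable again at the end (sw_hotplug.sh@119)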
00:18:10.089 [2024-11-04 13:51:56.764643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.089 [2024-11-04 13:51:56.764812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.089 [2024-11-04 13:51:56.764942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.089 [2024-11-04 13:51:56.765114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.089 [2024-11-04 13:51:56.765206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.089 [2024-11-04 13:51:56.765324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.089 [2024-11-04 13:51:56.765460] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.089 [2024-11-04 13:51:56.765511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.089 [2024-11-04 13:51:56.765599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.090 [2024-11-04 13:51:56.765750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.090 [2024-11-04 13:51:56.765853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.090 [2024-11-04 13:51:56.766017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.090 13:51:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.090 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:10.090 13:51:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:10.348 [2024-11-04 13:51:57.161707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
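
The bdev_bdfs calls threaded through the trace implement the wait: dump the bdev list over RPC, extract each NVMe namespace's PCI address with jq, and poll until the removed BDFs stop showing up. Reconstructed from the traced pipeline at sw_hotplug.sh@12-@13 and the loop at @50-@51 (the trace feeds jq via process substitution on /dev/fd/63; a plain pipe is an equivalent simplification, and rpc_cmd is assumed to be the framework's wrapper around rpc.py):

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

The "(( 2 > 0 ))" and "(( 0 > 0 ))" checks in the trace are exactly this loop condition: two BDFs still present right after removal, then none once the target has torn the controllers down.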
00:18:10.348 [2024-11-04 13:51:57.164993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.348 [2024-11-04 13:51:57.165212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.348 [2024-11-04 13:51:57.165364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.348 [2024-11-04 13:51:57.165509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.348 [2024-11-04 13:51:57.165562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.348 [2024-11-04 13:51:57.165795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.348 [2024-11-04 13:51:57.165956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.348 [2024-11-04 13:51:57.166053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.348 [2024-11-04 13:51:57.166167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.348 [2024-11-04 13:51:57.166262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:10.348 [2024-11-04 13:51:57.166371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.348 [2024-11-04 13:51:57.166442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:10.606 13:51:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.606 13:51:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:10.606 13:51:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:10.606 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:10.865 13:51:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:23.061 13:52:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.061 13:52:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:23.061 13:52:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:23.061 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:23.062 [2024-11-04 13:52:09.762007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:23.062 [2024-11-04 13:52:09.766123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.062 [2024-11-04 13:52:09.766190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.062 [2024-11-04 13:52:09.766214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.062 [2024-11-04 13:52:09.766251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.062 [2024-11-04 13:52:09.766267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.062 [2024-11-04 13:52:09.766294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.062 [2024-11-04 13:52:09.766312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.062 [2024-11-04 13:52:09.766334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.062 [2024-11-04 13:52:09.766349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.062 [2024-11-04 13:52:09.766371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.062 [2024-11-04 13:52:09.766386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.062 [2024-11-04 13:52:09.766407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:23.062 13:52:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.062 13:52:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:23.062 13:52:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:23.062 13:52:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:23.319 [2024-11-04 13:52:10.162006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:18:23.319 [2024-11-04 13:52:10.165183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.319 [2024-11-04 13:52:10.165236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.319 [2024-11-04 13:52:10.165260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.319 [2024-11-04 13:52:10.165288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.319 [2024-11-04 13:52:10.165306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.319 [2024-11-04 13:52:10.165321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.319 [2024-11-04 13:52:10.165340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.319 [2024-11-04 13:52:10.165353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.319 [2024-11-04 13:52:10.165374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.319 [2024-11-04 13:52:10.165389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:23.319 [2024-11-04 13:52:10.165405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.319 [2024-11-04 13:52:10.165419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:18:23.577 13:52:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.577 13:52:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:23.577 13:52:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:23.577 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:23.835 13:52:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.22 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.22 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.22 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.22 2 00:18:36.063 remove_attach_helper took 45.22s to complete (handling 2 nvme drive(s)) 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.063 13:52:22 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:18:36.063 13:52:22 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:36.063 13:52:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.641 13:52:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.641 13:52:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 [2024-11-04 13:52:28.918034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
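The three positional arguments that debug_remove_attach_helper forwards above are bound at sw_hotplug.sh@27-29. A minimal sketch of that prologue, reconstructed from the xtrace (the comments are interpretation, not script text):

    remove_attach_helper() {
        local hotplug_events=$1   # 3: remove/attach cycles to drive
        local hotplug_wait=$2     # 6: seconds slept at @36 before the first re-check
        local use_bdev=$3         # true: verify device presence through bdev_get_bdevs
        # ... remove/attach loop follows
    }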
00:18:42.641 [2024-11-04 13:52:28.921152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:28.921214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:28.921237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 [2024-11-04 13:52:28.921272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:28.921287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:28.921305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 [2024-11-04 13:52:28.921321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:28.921337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:28.921351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 [2024-11-04 13:52:28.921369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:28.921382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:28.921402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 13:52:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:42.641 13:52:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:42.641 [2024-11-04 13:52:29.418064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
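Each removal cycle opens with sw_hotplug.sh@39-40 writing 1 per device, after which SPDK logs the controller entering the failed state and aborts its outstanding admin commands; the four ABORTED completions per controller above are the four ASYNC EVENT REQUESTs (cid 187-190) the driver keeps queued. A plausible reading of @39-40, with the sysfs target an assumption inferred from the effect rather than quoted from the script:

    for dev in "${nvmes[@]}"; do                     # 0000:00:10.0 0000:00:11.0
        echo 1 > "/sys/bus/pci/devices/$dev/remove"  # hot-remove the function (assumed path)
    done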
00:18:42.641 [2024-11-04 13:52:29.420346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:29.420544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:29.420610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 [2024-11-04 13:52:29.420651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:29.420671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:29.420686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 [2024-11-04 13:52:29.420705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:29.420719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:29.420737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 [2024-11-04 13:52:29.420754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.641 [2024-11-04 13:52:29.420771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.641 [2024-11-04 13:52:29.420785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.641 13:52:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.641 13:52:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.641 13:52:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:42.641 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:42.906 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:43.164 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:43.164 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:43.164 13:52:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:55.404 13:52:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.404 13:52:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:55.404 13:52:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:55.404 13:52:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:55.404 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:55.404 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:55.404 [2024-11-04 13:52:42.018336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
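The heavily backslashed test at sw_hotplug.sh@71 is just how set -x prints the literal right-hand side of a [[ == ]] pattern match; unescaped, the check reads:

    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]   # did both drives re-enumerate?

so a cycle only counts as complete once bdev_get_bdevs reports exactly the original pair of PCI addresses again.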
00:18:55.404 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:55.404 [2024-11-04 13:52:42.021909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.404 [2024-11-04 13:52:42.022175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.404 [2024-11-04 13:52:42.022277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.404 [2024-11-04 13:52:42.022386] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.404 [2024-11-04 13:52:42.022437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.404 [2024-11-04 13:52:42.022605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.404 [2024-11-04 13:52:42.022797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.404 [2024-11-04 13:52:42.022922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.405 [2024-11-04 13:52:42.023041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.405 [2024-11-04 13:52:42.023149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.405 [2024-11-04 13:52:42.023170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.405 [2024-11-04 13:52:42.023190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.405 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:55.405 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:55.405 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:55.405 13:52:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.405 13:52:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:55.405 13:52:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.405 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:55.405 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:55.664 [2024-11-04 13:52:42.418349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
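The (( 1 > 0 )) / sleep 0.5 pairs above come from a wait loop that polls SPDK rather than sysfs. Reassembled from the xtrace at sw_hotplug.sh@12-13 and @50-51 (the function body is verbatim from the trace, modulo the /dev/fd/63 process substitution rendered here as a pipe; the loop framing is inferred):

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do     # any NVMe-backed bdevs still registered?
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

The jq filter pulls the PCI address out of each bdev's NVMe-specific blob, so an empty list means the controllers have detached from the SPDK application itself, not merely from the kernel.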
00:18:55.664 [2024-11-04 13:52:42.420962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.664 [2024-11-04 13:52:42.421134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.664 [2024-11-04 13:52:42.421183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.664 [2024-11-04 13:52:42.421211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.664 [2024-11-04 13:52:42.421230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.664 [2024-11-04 13:52:42.421244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.664 [2024-11-04 13:52:42.421261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.664 [2024-11-04 13:52:42.421274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.664 [2024-11-04 13:52:42.421289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.664 [2024-11-04 13:52:42.421304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:55.664 [2024-11-04 13:52:42.421319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.664 [2024-11-04 13:52:42.421332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.664 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:55.664 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:55.664 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:55.664 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:55.664 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:55.664 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:55.664 13:52:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.664 13:52:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:55.922 13:52:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.922 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:55.922 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:55.922 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:55.922 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:55.922 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:55.922 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:56.179 13:52:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:08.386 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:08.386 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:08.386 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:08.386 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:08.386 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:08.386 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:08.386 13:52:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.386 13:52:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:08.386 13:52:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:08.386 13:52:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.386 13:52:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:08.386 [2024-11-04 13:52:55.118643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:19:08.386 [2024-11-04 13:52:55.121096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.386 [2024-11-04 13:52:55.121279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.386 [2024-11-04 13:52:55.121382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.386 [2024-11-04 13:52:55.121481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.386 [2024-11-04 13:52:55.121643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.386 [2024-11-04 13:52:55.121783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.386 [2024-11-04 13:52:55.121911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.386 [2024-11-04 13:52:55.122014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.386 [2024-11-04 13:52:55.122164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.386 [2024-11-04 13:52:55.122337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.386 [2024-11-04 13:52:55.122433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.386 [2024-11-04 13:52:55.122584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.386 13:52:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:19:08.386 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:08.952 13:52:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.952 13:52:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:08.952 13:52:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:19:08.952 13:52:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:08.952 [2024-11-04 13:52:55.718642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:19:08.952 [2024-11-04 13:52:55.721749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.952 [2024-11-04 13:52:55.721942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.952 [2024-11-04 13:52:55.722100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.952 [2024-11-04 13:52:55.722390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.952 [2024-11-04 13:52:55.722440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.953 [2024-11-04 13:52:55.722592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.953 [2024-11-04 13:52:55.722652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.953 [2024-11-04 13:52:55.722684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.953 [2024-11-04 13:52:55.722796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:08.953 [2024-11-04 13:52:55.722853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:08.953 [2024-11-04 13:52:55.722892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:08.953 [2024-11-04 13:52:55.723011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:09.545 13:52:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.545 13:52:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:09.545 13:52:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:09.545 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
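The attach half mirrors the removal: @56 echoes 1, @58-62 loop over the devices echoing the driver name, the BDF twice, and an empty string, and @66 sleeps 12 s for probing to settle. Only the echoed values come from the trace; the sysfs destinations below are assumptions about where they plausibly land:

    echo 1 > /sys/bus/pci/rescan                                            # @56 (assumed node)
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (assumed node)
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60 (assumed node)
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @61 (assumed node)
        echo ''    > "/sys/bus/pci/devices/$dev/driver_override"            # @62: clear override (assumed)
    done
    sleep 12                                                                # @66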
00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:09.804 13:52:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.86 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.86 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.86 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.86 2 00:19:22.021 remove_attach_helper took 45.86s to complete (handling 2 nvme drive(s)) 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:19:22.021 13:53:08 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69495 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 69495 ']' 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 69495 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69495 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69495' 00:19:22.021 killing process with pid 69495 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@971 -- # kill 69495 00:19:22.021 13:53:08 sw_hotplug -- common/autotest_common.sh@976 -- # wait 69495 00:19:25.313 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:25.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:25.572 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.572 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.831 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:25.831 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:25.831 00:19:25.831 real 2m34.405s 00:19:25.831 user 1m52.354s 00:19:25.831 sys 0m22.579s 00:19:25.831 13:53:12 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:19:25.831 13:53:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:25.831 ************************************ 00:19:25.831 END TEST sw_hotplug 00:19:25.831 ************************************ 00:19:25.831 13:53:12 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:19:25.831 13:53:12 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:25.831 13:53:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:25.831 13:53:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:25.831 13:53:12 -- common/autotest_common.sh@10 -- # set +x 00:19:25.831 ************************************ 00:19:25.831 START TEST nvme_xnvme 00:19:25.831 ************************************ 00:19:25.831 13:53:12 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:19:26.090 * Looking for test storage... 00:19:26.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.090 --rc genhtml_branch_coverage=1 00:19:26.090 --rc genhtml_function_coverage=1 00:19:26.090 --rc genhtml_legend=1 00:19:26.090 --rc geninfo_all_blocks=1 00:19:26.090 --rc geninfo_unexecuted_blocks=1 00:19:26.090 00:19:26.090 ' 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.090 --rc genhtml_branch_coverage=1 00:19:26.090 --rc genhtml_function_coverage=1 00:19:26.090 --rc genhtml_legend=1 00:19:26.090 --rc geninfo_all_blocks=1 00:19:26.090 --rc geninfo_unexecuted_blocks=1 00:19:26.090 00:19:26.090 ' 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.090 --rc genhtml_branch_coverage=1 00:19:26.090 --rc genhtml_function_coverage=1 00:19:26.090 --rc genhtml_legend=1 00:19:26.090 --rc geninfo_all_blocks=1 00:19:26.090 --rc geninfo_unexecuted_blocks=1 00:19:26.090 00:19:26.090 ' 00:19:26.090 13:53:12 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.090 --rc genhtml_branch_coverage=1 00:19:26.090 --rc genhtml_function_coverage=1 00:19:26.090 --rc genhtml_legend=1 00:19:26.090 --rc geninfo_all_blocks=1 00:19:26.090 --rc geninfo_unexecuted_blocks=1 00:19:26.090 00:19:26.090 ' 00:19:26.090 13:53:12 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.090 13:53:12 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.090 13:53:12 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.091 13:53:12 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.091 13:53:12 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.091 13:53:12 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:19:26.091 13:53:12 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.091 13:53:12 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:19:26.091 13:53:12 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:26.091 13:53:12 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:26.091 13:53:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:26.091 ************************************ 00:19:26.091 START TEST xnvme_to_malloc_dd_copy 00:19:26.091 ************************************ 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:19:26.091 13:53:12 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:19:26.091 13:53:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:19:26.091 { 00:19:26.091 "subsystems": [ 00:19:26.091 { 00:19:26.091 "subsystem": "bdev", 00:19:26.091 "config": [ 00:19:26.091 { 00:19:26.091 "params": { 00:19:26.091 "block_size": 512, 00:19:26.091 "num_blocks": 2097152, 00:19:26.091 "name": "malloc0" 00:19:26.091 }, 00:19:26.091 "method": "bdev_malloc_create" 00:19:26.091 }, 00:19:26.091 { 00:19:26.091 "params": { 00:19:26.091 "io_mechanism": "libaio", 00:19:26.091 "filename": "/dev/nullb0", 00:19:26.091 "name": "null0" 00:19:26.091 }, 00:19:26.091 "method": "bdev_xnvme_create" 00:19:26.091 }, 00:19:26.091 { 00:19:26.091 "method": "bdev_wait_for_examine" 00:19:26.091 } 00:19:26.091 ] 00:19:26.091 } 00:19:26.091 ] 00:19:26.091 } 00:19:26.350 [2024-11-04 13:53:13.024735] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
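gen_conf emits the JSON dumped above on a spare file descriptor and spdk_dd reads it back through --json /dev/fd/62, so the copy is reproducible standalone. A sketch with the harness's fd plumbing replaced by process substitution (flags and config verbatim from the trace; binary path as logged):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
        --json <(printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" } ] } ] }')

This stands up a 1 GiB malloc bdev (2097152 x 512 B blocks) and an xnvme bdev over the null_blk device, waits for examine to finish, then streams malloc0 into null0.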
00:19:26.350 [2024-11-04 13:53:13.025099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70880 ] 00:19:26.350 [2024-11-04 13:53:13.216227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.608 [2024-11-04 13:53:13.351967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.139  [2024-11-04T13:53:17.451Z] Copying: 210/1024 [MB] (210 MBps) [2024-11-04T13:53:18.018Z] Copying: 420/1024 [MB] (210 MBps) [2024-11-04T13:53:19.394Z] Copying: 636/1024 [MB] (215 MBps) [2024-11-04T13:53:19.958Z] Copying: 853/1024 [MB] (216 MBps) [2024-11-04T13:53:25.242Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:19:38.320 00:19:38.320 13:53:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:19:38.320 13:53:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:19:38.320 13:53:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:19:38.320 13:53:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:19:38.320 { 00:19:38.320 "subsystems": [ 00:19:38.320 { 00:19:38.320 "subsystem": "bdev", 00:19:38.320 "config": [ 00:19:38.320 { 00:19:38.320 "params": { 00:19:38.320 "block_size": 512, 00:19:38.320 "num_blocks": 2097152, 00:19:38.320 "name": "malloc0" 00:19:38.320 }, 00:19:38.320 "method": "bdev_malloc_create" 00:19:38.320 }, 00:19:38.320 { 00:19:38.320 "params": { 00:19:38.320 "io_mechanism": "libaio", 00:19:38.320 "filename": "/dev/nullb0", 00:19:38.320 "name": "null0" 00:19:38.320 }, 00:19:38.320 "method": "bdev_xnvme_create" 00:19:38.320 }, 00:19:38.320 { 00:19:38.320 "method": "bdev_wait_for_examine" 00:19:38.320 } 00:19:38.320 ] 00:19:38.320 } 00:19:38.320 ] 00:19:38.320 } 00:19:38.320 [2024-11-04 13:53:24.623751] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
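As a cross-check on the numbers above: 1024 MB at the reported 213 MBps average works out to about 4.8 s of actual data movement, while the surrounding timestamps span roughly 12 s; the difference is app start-up, DPDK initialization, and bdev examine rather than the copy itself.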
00:19:38.320 [2024-11-04 13:53:24.624376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71006 ] 00:19:38.320 [2024-11-04 13:53:24.810377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.320 [2024-11-04 13:53:24.942346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.853  [2024-11-04T13:53:28.712Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-04T13:53:29.647Z] Copying: 429/1024 [MB] (216 MBps) [2024-11-04T13:53:31.031Z] Copying: 641/1024 [MB] (212 MBps) [2024-11-04T13:53:31.597Z] Copying: 859/1024 [MB] (217 MBps) [2024-11-04T13:53:35.784Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:19:48.862 00:19:48.862 13:53:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:19:48.862 13:53:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:48.862 13:53:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:19:48.862 13:53:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:19:48.862 13:53:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:19:48.862 13:53:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:19:48.862 { 00:19:48.862 "subsystems": [ 00:19:48.862 { 00:19:48.862 "subsystem": "bdev", 00:19:48.862 "config": [ 00:19:48.862 { 00:19:48.862 "params": { 00:19:48.862 "block_size": 512, 00:19:48.862 "num_blocks": 2097152, 00:19:48.862 "name": "malloc0" 00:19:48.862 }, 00:19:48.862 "method": "bdev_malloc_create" 00:19:48.862 }, 00:19:48.862 { 00:19:48.862 "params": { 00:19:48.862 "io_mechanism": "io_uring", 00:19:48.862 "filename": "/dev/nullb0", 00:19:48.862 "name": "null0" 00:19:48.862 }, 00:19:48.862 "method": "bdev_xnvme_create" 00:19:48.862 }, 00:19:48.862 { 00:19:48.862 "method": "bdev_wait_for_examine" 00:19:48.862 } 00:19:48.862 ] 00:19:48.862 } 00:19:48.862 ] 00:19:48.862 } 00:19:48.862 [2024-11-04 13:53:35.692320] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:19:48.862 [2024-11-04 13:53:35.692764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:19:49.120 [2024-11-04 13:53:35.871606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.120 [2024-11-04 13:53:36.010059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.407  [2024-11-04T13:53:39.587Z] Copying: 235/1024 [MB] (235 MBps) [2024-11-04T13:53:40.992Z] Copying: 462/1024 [MB] (227 MBps) [2024-11-04T13:53:41.586Z] Copying: 693/1024 [MB] (231 MBps) [2024-11-04T13:53:42.209Z] Copying: 926/1024 [MB] (232 MBps) [2024-11-04T13:53:47.473Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:20:00.551 00:20:00.551 13:53:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:20:00.551 13:53:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:20:00.551 13:53:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:20:00.551 13:53:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:20:00.551 { 00:20:00.551 "subsystems": [ 00:20:00.551 { 00:20:00.551 "subsystem": "bdev", 00:20:00.551 "config": [ 00:20:00.551 { 00:20:00.551 "params": { 00:20:00.551 "block_size": 512, 00:20:00.551 "num_blocks": 2097152, 00:20:00.551 "name": "malloc0" 00:20:00.551 }, 00:20:00.551 "method": "bdev_malloc_create" 00:20:00.551 }, 00:20:00.551 { 00:20:00.551 "params": { 00:20:00.551 "io_mechanism": "io_uring", 00:20:00.551 "filename": "/dev/nullb0", 00:20:00.551 "name": "null0" 00:20:00.551 }, 00:20:00.551 "method": "bdev_xnvme_create" 00:20:00.551 }, 00:20:00.551 { 00:20:00.551 "method": "bdev_wait_for_examine" 00:20:00.551 } 00:20:00.551 ] 00:20:00.551 } 00:20:00.551 ] 00:20:00.551 } 00:20:00.551 [2024-11-04 13:53:46.694767] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
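Between the libaio and io_uring passes exactly one knob changes: the loop at xnvme.sh@38-39 overwrites the io_mechanism key before the config is regenerated, as the trace shows:

    method_bdev_xnvme_create_0["io_mechanism"]=io_uring   # was libaio on the first pass

Everything else (the malloc0 geometry, /dev/nullb0, the copy direction) is held constant, so the per-pass averages - 213 and 216 MBps under libaio against 231 MBps for the io_uring malloc0-to-null0 leg above - isolate the I/O engine on this null-block target.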
00:20:00.551 [2024-11-04 13:53:46.694926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71255 ] 00:20:00.551 [2024-11-04 13:53:46.872350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.551 [2024-11-04 13:53:47.002843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.082  [2024-11-04T13:53:50.945Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-04T13:53:51.879Z] Copying: 453/1024 [MB] (228 MBps) [2024-11-04T13:53:52.858Z] Copying: 687/1024 [MB] (234 MBps) [2024-11-04T13:53:53.426Z] Copying: 883/1024 [MB] (195 MBps) [2024-11-04T13:53:57.612Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:20:10.690 00:20:10.948 13:53:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:20:10.948 13:53:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:20:10.948 00:20:10.948 real 0m44.800s 00:20:10.948 user 0m39.284s 00:20:10.948 sys 0m4.891s 00:20:10.948 13:53:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:10.948 ************************************ 00:20:10.948 END TEST xnvme_to_malloc_dd_copy 00:20:10.948 ************************************ 00:20:10.948 13:53:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:20:10.948 13:53:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:10.948 13:53:57 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:10.948 13:53:57 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:10.948 13:53:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:10.948 ************************************ 00:20:10.948 START TEST xnvme_bdevperf 00:20:10.948 ************************************ 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:10.948 13:53:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:10.948 { 00:20:10.948 "subsystems": [ 00:20:10.948 { 00:20:10.948 "subsystem": "bdev", 00:20:10.948 "config": [ 00:20:10.948 { 00:20:10.948 "params": { 00:20:10.948 "io_mechanism": "libaio", 00:20:10.948 "filename": "/dev/nullb0", 00:20:10.948 "name": "null0" 00:20:10.948 }, 00:20:10.948 "method": "bdev_xnvme_create" 00:20:10.948 }, 00:20:10.948 { 00:20:10.948 "method": "bdev_wait_for_examine" 00:20:10.948 } 00:20:10.948 ] 00:20:10.948 } 00:20:10.948 ] 00:20:10.948 } 00:20:11.207 [2024-11-04 13:53:57.911669] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:20:11.207 [2024-11-04 13:53:57.912342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71398 ] 00:20:11.207 [2024-11-04 13:53:58.110052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.464 [2024-11-04 13:53:58.240707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.722 Running I/O for 5 seconds... 
00:20:14.032 136512.00 IOPS, 533.25 MiB/s [2024-11-04T13:54:01.889Z] 135776.00 IOPS, 530.38 MiB/s [2024-11-04T13:54:02.824Z] 134101.33 IOPS, 523.83 MiB/s [2024-11-04T13:54:03.758Z] 136352.00 IOPS, 532.62 MiB/s [2024-11-04T13:54:03.758Z] 133478.40 IOPS, 521.40 MiB/s 00:20:16.836 Latency(us) 00:20:16.836 [2024-11-04T13:54:03.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.836 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:16.836 null0 : 5.00 133416.14 521.16 0.00 0.00 476.93 353.04 2371.78 00:20:16.836 [2024-11-04T13:54:03.758Z] =================================================================================================================== 00:20:16.836 [2024-11-04T13:54:03.758Z] Total : 133416.14 521.16 0.00 0.00 476.93 353.04 2371.78 00:20:18.212 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:20:18.212 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:18.212 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:20:18.212 13:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:20:18.212 13:54:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:18.212 13:54:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:18.212 { 00:20:18.212 "subsystems": [ 00:20:18.212 { 00:20:18.212 "subsystem": "bdev", 00:20:18.212 "config": [ 00:20:18.212 { 00:20:18.212 "params": { 00:20:18.212 "io_mechanism": "io_uring", 00:20:18.212 "filename": "/dev/nullb0", 00:20:18.212 "name": "null0" 00:20:18.212 }, 00:20:18.212 "method": "bdev_xnvme_create" 00:20:18.212 }, 00:20:18.212 { 00:20:18.212 "method": "bdev_wait_for_examine" 00:20:18.212 } 00:20:18.212 ] 00:20:18.212 } 00:20:18.212 ] 00:20:18.212 } 00:20:18.212 [2024-11-04 13:54:04.953526] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:20:18.212 [2024-11-04 13:54:04.953708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:20:18.212 [2024-11-04 13:54:05.130252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.470 [2024-11-04 13:54:05.249244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.729 Running I/O for 5 seconds... 
00:20:21.039 181952.00 IOPS, 710.75 MiB/s [2024-11-04T13:54:08.896Z] 178976.00 IOPS, 699.12 MiB/s [2024-11-04T13:54:09.828Z] 179989.33 IOPS, 703.08 MiB/s [2024-11-04T13:54:10.763Z] 180752.00 IOPS, 706.06 MiB/s 00:20:23.841 Latency(us) 00:20:23.841 [2024-11-04T13:54:10.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.841 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:23.841 null0 : 5.00 181008.86 707.07 0.00 0.00 351.03 209.68 2012.89 00:20:23.841 [2024-11-04T13:54:10.763Z] =================================================================================================================== 00:20:23.841 [2024-11-04T13:54:10.763Z] Total : 181008.86 707.07 0.00 0.00 351.03 209.68 2012.89 00:20:25.230 13:54:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:20:25.230 13:54:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:20:25.230 00:20:25.230 real 0m14.109s 00:20:25.230 user 0m10.586s 00:20:25.230 sys 0m3.284s 00:20:25.230 13:54:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:25.230 ************************************ 00:20:25.230 END TEST xnvme_bdevperf 00:20:25.230 ************************************ 00:20:25.230 13:54:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 ************************************ 00:20:25.230 END TEST nvme_xnvme 00:20:25.230 ************************************ 00:20:25.230 00:20:25.230 real 0m59.216s 00:20:25.230 user 0m50.010s 00:20:25.230 sys 0m8.341s 00:20:25.230 13:54:11 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:25.230 13:54:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 13:54:11 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:25.230 13:54:11 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:25.230 13:54:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.230 13:54:11 -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 ************************************ 00:20:25.230 START TEST blockdev_xnvme 00:20:25.230 ************************************ 00:20:25.230 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:25.230 * Looking for test storage... 
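Worth pulling out of the two bdevperf passes above: their configs differ only in io_mechanism, and against the same 1 GiB null_blk device io_uring averaged 181008.86 IOPS at 351.03 us versus libaio's 133416.14 IOPS at 476.93 us, roughly a 36% throughput gain at lower latency. A sketch of the bare invocation behind both runs, with the flags copied from the log and the inline JSON standing in for gen_conf:

    # 4 KiB random reads, queue depth 64, 5 s, against the xnvme bdev "null0".
    # Switch io_mechanism between "libaio" and "io_uring" to reproduce the comparison.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 64 -o 4096 -w randread -t 5 -T null0 --json <(cat <<'CONF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create",
       "params":{"name":"null0","filename":"/dev/nullb0","io_mechanism":"io_uring"}},
      {"method":"bdev_wait_for_examine"}]}]}
    CONF
    )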
00:20:25.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:25.230 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:25.230 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:25.230 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:20:25.490 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.490 13:54:12 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:20:25.490 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.490 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:25.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.490 --rc genhtml_branch_coverage=1 00:20:25.490 --rc genhtml_function_coverage=1 00:20:25.490 --rc genhtml_legend=1 00:20:25.490 --rc geninfo_all_blocks=1 00:20:25.490 --rc geninfo_unexecuted_blocks=1 00:20:25.490 00:20:25.490 ' 00:20:25.490 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:25.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.490 --rc genhtml_branch_coverage=1 00:20:25.490 --rc genhtml_function_coverage=1 00:20:25.490 --rc genhtml_legend=1 
00:20:25.490 --rc geninfo_all_blocks=1 00:20:25.490 --rc geninfo_unexecuted_blocks=1 00:20:25.490 00:20:25.490 ' 00:20:25.490 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:25.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.490 --rc genhtml_branch_coverage=1 00:20:25.490 --rc genhtml_function_coverage=1 00:20:25.490 --rc genhtml_legend=1 00:20:25.490 --rc geninfo_all_blocks=1 00:20:25.490 --rc geninfo_unexecuted_blocks=1 00:20:25.490 00:20:25.490 ' 00:20:25.490 13:54:12 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:25.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.490 --rc genhtml_branch_coverage=1 00:20:25.490 --rc genhtml_function_coverage=1 00:20:25.490 --rc genhtml_legend=1 00:20:25.491 --rc geninfo_all_blocks=1 00:20:25.491 --rc geninfo_unexecuted_blocks=1 00:20:25.491 00:20:25.491 ' 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71631 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:25.491 13:54:12 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71631 00:20:25.491 13:54:12 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 71631 ']' 00:20:25.491 13:54:12 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.491 13:54:12 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.491 13:54:12 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.491 13:54:12 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.491 13:54:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:25.491 [2024-11-04 13:54:12.360014] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:20:25.491 [2024-11-04 13:54:12.360434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:20:25.750 [2024-11-04 13:54:12.565438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.010 [2024-11-04 13:54:12.742802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.946 13:54:13 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:26.946 13:54:13 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:20:26.946 13:54:13 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:26.946 13:54:13 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:20:26.946 13:54:13 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:26.946 13:54:13 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:26.946 13:54:13 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:27.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:27.464 Waiting for block devices as requested 00:20:27.464 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:27.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:27.722 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:27.998 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:33.267 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:20:33.267 
13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.267 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.267 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:20:33.267 nvme0n1 00:20:33.267 nvme1n1 00:20:33.268 nvme2n1 00:20:33.268 nvme2n2 00:20:33.268 nvme2n3 00:20:33.268 nvme3n1 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.268 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:33.268 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "27d029a2-366b-47d5-ab1d-9278dc9a573f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "27d029a2-366b-47d5-ab1d-9278dc9a573f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "baf3906f-c0fd-48d6-91d9-e8e21a055aa6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "baf3906f-c0fd-48d6-91d9-e8e21a055aa6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8bb2ea4d-0351-499a-b9d4-2caecb259ed2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8bb2ea4d-0351-499a-b9d4-2caecb259ed2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2ba30e99-71c4-4877-b3fa-a93ef570e7dd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2ba30e99-71c4-4877-b3fa-a93ef570e7dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9265d208-b463-4c5a-a637-cf096c0c2e36"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9265d208-b463-4c5a-a637-cf096c0c2e36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0f189d2e-c891-4480-8114-41da28aacfa9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0f189d2e-c891-4480-8114-41da28aacfa9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:33.268 13:54:20 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71631 00:20:33.268 13:54:20 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 71631 ']' 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 71631 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71631 00:20:33.268 killing process with pid 71631 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71631' 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 71631 00:20:33.268 13:54:20 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 71631 00:20:36.555 13:54:22 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:36.555 13:54:22 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:36.555 13:54:22 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:36.555 13:54:22 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:36.555 13:54:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:36.555 ************************************ 00:20:36.555 START TEST bdev_hello_world 00:20:36.555 ************************************ 00:20:36.555 13:54:22 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:36.555 [2024-11-04 13:54:22.866956] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:20:36.555 [2024-11-04 13:54:22.867375] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72016 ] 00:20:36.555 [2024-11-04 13:54:23.071421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.555 [2024-11-04 13:54:23.194958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.815 [2024-11-04 13:54:23.688884] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:36.815 [2024-11-04 13:54:23.688946] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:36.815 [2024-11-04 13:54:23.688974] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:36.815 [2024-11-04 13:54:23.691417] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:36.815 [2024-11-04 13:54:23.691779] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:36.815 [2024-11-04 13:54:23.691800] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:36.815 [2024-11-04 13:54:23.692038] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
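That read-back line is the whole point of the hello-world pass: hello_bdev opens one bdev from the generated config, writes the string, reads it back, and compares. As run_test launched it above, with bdev.json being the file assembled from the bdev_xnvme_create lines printed during setup:

    # -b selects which bdev to exercise; the harness picks the first, nvme0n1,
    # but any of the six xnvme bdevs in bdev.json would do.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1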
00:20:36.815 00:20:36.815 [2024-11-04 13:54:23.692062] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:38.190 00:20:38.190 real 0m2.172s 00:20:38.190 ************************************ 00:20:38.190 END TEST bdev_hello_world 00:20:38.190 ************************************ 00:20:38.190 user 0m1.769s 00:20:38.190 sys 0m0.281s 00:20:38.190 13:54:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:38.190 13:54:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:38.190 13:54:24 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:38.190 13:54:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:38.190 13:54:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.190 13:54:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:38.190 ************************************ 00:20:38.190 START TEST bdev_bounds 00:20:38.190 ************************************ 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:20:38.190 Process bdevio pid: 72058 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72058 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72058' 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72058 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 72058 ']' 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.190 13:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:38.448 [2024-11-04 13:54:25.112043] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:20:38.448 [2024-11-04 13:54:25.112230] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72058 ] 00:20:38.448 [2024-11-04 13:54:25.320517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:38.707 [2024-11-04 13:54:25.509402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.707 [2024-11-04 13:54:25.509488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.707 [2024-11-04 13:54:25.509497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.275 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.275 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:20:39.275 13:54:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:39.534 I/O targets: 00:20:39.534 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:39.534 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:39.534 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:39.534 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:39.534 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:39.534 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:39.534 00:20:39.534 00:20:39.534 CUnit - A unit testing framework for C - Version 2.1-3 00:20:39.534 http://cunit.sourceforge.net/ 00:20:39.534 00:20:39.534 00:20:39.534 Suite: bdevio tests on: nvme3n1 00:20:39.534 Test: blockdev write read block ...passed 00:20:39.534 Test: blockdev write zeroes read block ...passed 00:20:39.534 Test: blockdev write zeroes read no split ...passed 00:20:39.534 Test: blockdev write zeroes read split ...passed 00:20:39.534 Test: blockdev write zeroes read split partial ...passed 00:20:39.534 Test: blockdev reset ...passed 00:20:39.534 Test: blockdev write read 8 blocks ...passed 00:20:39.534 Test: blockdev write read size > 128k ...passed 00:20:39.534 Test: blockdev write read invalid size ...passed 00:20:39.534 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.534 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.534 Test: blockdev write read max offset ...passed 00:20:39.534 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.534 Test: blockdev writev readv 8 blocks ...passed 00:20:39.534 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.534 Test: blockdev writev readv block ...passed 00:20:39.534 Test: blockdev writev readv size > 128k ...passed 00:20:39.534 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.534 Test: blockdev comparev and writev ...passed 00:20:39.534 Test: blockdev nvme passthru rw ...passed 00:20:39.534 Test: blockdev nvme passthru vendor specific ...passed 00:20:39.534 Test: blockdev nvme admin passthru ...passed 00:20:39.534 Test: blockdev copy ...passed 00:20:39.534 Suite: bdevio tests on: nvme2n3 00:20:39.534 Test: blockdev write read block ...passed 00:20:39.534 Test: blockdev write zeroes read block ...passed 00:20:39.534 Test: blockdev write zeroes read no split ...passed 00:20:39.534 Test: blockdev write zeroes read split ...passed 00:20:39.534 Test: blockdev write zeroes read split partial ...passed 00:20:39.534 Test: blockdev reset ...passed 
00:20:39.534 Test: blockdev write read 8 blocks ...passed 00:20:39.534 Test: blockdev write read size > 128k ...passed 00:20:39.534 Test: blockdev write read invalid size ...passed 00:20:39.534 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.534 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.534 Test: blockdev write read max offset ...passed 00:20:39.534 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.534 Test: blockdev writev readv 8 blocks ...passed 00:20:39.534 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.534 Test: blockdev writev readv block ...passed 00:20:39.534 Test: blockdev writev readv size > 128k ...passed 00:20:39.534 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.534 Test: blockdev comparev and writev ...passed 00:20:39.534 Test: blockdev nvme passthru rw ...passed 00:20:39.534 Test: blockdev nvme passthru vendor specific ...passed 00:20:39.534 Test: blockdev nvme admin passthru ...passed 00:20:39.534 Test: blockdev copy ...passed 00:20:39.534 Suite: bdevio tests on: nvme2n2 00:20:39.534 Test: blockdev write read block ...passed 00:20:39.534 Test: blockdev write zeroes read block ...passed 00:20:39.534 Test: blockdev write zeroes read no split ...passed 00:20:39.813 Test: blockdev write zeroes read split ...passed 00:20:39.813 Test: blockdev write zeroes read split partial ...passed 00:20:39.813 Test: blockdev reset ...passed 00:20:39.813 Test: blockdev write read 8 blocks ...passed 00:20:39.813 Test: blockdev write read size > 128k ...passed 00:20:39.813 Test: blockdev write read invalid size ...passed 00:20:39.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.813 Test: blockdev write read max offset ...passed 00:20:39.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.813 Test: blockdev writev readv 8 blocks ...passed 00:20:39.813 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.813 Test: blockdev writev readv block ...passed 00:20:39.813 Test: blockdev writev readv size > 128k ...passed 00:20:39.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.813 Test: blockdev comparev and writev ...passed 00:20:39.813 Test: blockdev nvme passthru rw ...passed 00:20:39.813 Test: blockdev nvme passthru vendor specific ...passed 00:20:39.813 Test: blockdev nvme admin passthru ...passed 00:20:39.813 Test: blockdev copy ...passed 00:20:39.813 Suite: bdevio tests on: nvme2n1 00:20:39.813 Test: blockdev write read block ...passed 00:20:39.813 Test: blockdev write zeroes read block ...passed 00:20:39.813 Test: blockdev write zeroes read no split ...passed 00:20:39.813 Test: blockdev write zeroes read split ...passed 00:20:39.813 Test: blockdev write zeroes read split partial ...passed 00:20:39.813 Test: blockdev reset ...passed 00:20:39.813 Test: blockdev write read 8 blocks ...passed 00:20:39.813 Test: blockdev write read size > 128k ...passed 00:20:39.813 Test: blockdev write read invalid size ...passed 00:20:39.813 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.813 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.813 Test: blockdev write read max offset ...passed 00:20:39.813 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.813 Test: blockdev writev readv 8 blocks 
...passed 00:20:39.813 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.813 Test: blockdev writev readv block ...passed 00:20:39.813 Test: blockdev writev readv size > 128k ...passed 00:20:39.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.813 Test: blockdev comparev and writev ...passed 00:20:39.813 Test: blockdev nvme passthru rw ...passed 00:20:39.813 Test: blockdev nvme passthru vendor specific ...passed 00:20:39.813 Test: blockdev nvme admin passthru ...passed 00:20:39.814 Test: blockdev copy ...passed 00:20:39.814 Suite: bdevio tests on: nvme1n1 00:20:39.814 Test: blockdev write read block ...passed 00:20:39.814 Test: blockdev write zeroes read block ...passed 00:20:39.814 Test: blockdev write zeroes read no split ...passed 00:20:39.814 Test: blockdev write zeroes read split ...passed 00:20:39.814 Test: blockdev write zeroes read split partial ...passed 00:20:39.814 Test: blockdev reset ...passed 00:20:39.814 Test: blockdev write read 8 blocks ...passed 00:20:39.814 Test: blockdev write read size > 128k ...passed 00:20:39.814 Test: blockdev write read invalid size ...passed 00:20:39.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.814 Test: blockdev write read max offset ...passed 00:20:39.814 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.814 Test: blockdev writev readv 8 blocks ...passed 00:20:39.814 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.814 Test: blockdev writev readv block ...passed 00:20:39.814 Test: blockdev writev readv size > 128k ...passed 00:20:39.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.814 Test: blockdev comparev and writev ...passed 00:20:39.814 Test: blockdev nvme passthru rw ...passed 00:20:39.814 Test: blockdev nvme passthru vendor specific ...passed 00:20:39.814 Test: blockdev nvme admin passthru ...passed 00:20:39.814 Test: blockdev copy ...passed 00:20:39.814 Suite: bdevio tests on: nvme0n1 00:20:39.814 Test: blockdev write read block ...passed 00:20:39.814 Test: blockdev write zeroes read block ...passed 00:20:39.814 Test: blockdev write zeroes read no split ...passed 00:20:40.072 Test: blockdev write zeroes read split ...passed 00:20:40.072 Test: blockdev write zeroes read split partial ...passed 00:20:40.072 Test: blockdev reset ...passed 00:20:40.072 Test: blockdev write read 8 blocks ...passed 00:20:40.072 Test: blockdev write read size > 128k ...passed 00:20:40.072 Test: blockdev write read invalid size ...passed 00:20:40.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:40.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:40.072 Test: blockdev write read max offset ...passed 00:20:40.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:40.072 Test: blockdev writev readv 8 blocks ...passed 00:20:40.072 Test: blockdev writev readv 30 x 1block ...passed 00:20:40.072 Test: blockdev writev readv block ...passed 00:20:40.072 Test: blockdev writev readv size > 128k ...passed 00:20:40.072 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:40.072 Test: blockdev comparev and writev ...passed 00:20:40.072 Test: blockdev nvme passthru rw ...passed 00:20:40.072 Test: blockdev nvme passthru vendor specific ...passed 00:20:40.072 Test: blockdev nvme admin passthru ...passed 00:20:40.072 Test: blockdev copy ...passed 
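Each of the six xnvme bdevs runs the identical bdevio suite, so the summary that follows counts 138 tests, i.e. 23 per device. A sketch of the two-step pattern bdev_bounds used above (both commands appear verbatim in the log; the backgrounding is plumbing for this sketch, the harness itself waits on the RPC socket before driving tests):

    # bdevio hosts the six bdevs from the same JSON config; tests.py then
    # drives the per-bdev suites over its RPC socket and prints the CUnit report.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests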
00:20:40.072 00:20:40.072 Run Summary: Type Total Ran Passed Failed Inactive 00:20:40.072 suites 6 6 n/a 0 0 00:20:40.072 tests 138 138 138 0 0 00:20:40.072 asserts 780 780 780 0 n/a 00:20:40.072 00:20:40.072 Elapsed time = 1.613 seconds 00:20:40.072 0 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72058 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 72058 ']' 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 72058 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72058 00:20:40.072 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:40.073 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:40.073 killing process with pid 72058 00:20:40.073 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72058' 00:20:40.073 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 72058 00:20:40.073 13:54:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 72058 00:20:41.450 13:54:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:41.450 00:20:41.450 real 0m3.122s 00:20:41.450 user 0m7.777s 00:20:41.450 sys 0m0.489s 00:20:41.450 13:54:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:41.450 ************************************ 00:20:41.450 END TEST bdev_bounds 00:20:41.450 ************************************ 00:20:41.450 13:54:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:41.450 13:54:28 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:20:41.450 13:54:28 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:41.450 13:54:28 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:41.450 13:54:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:41.450 ************************************ 00:20:41.450 START TEST bdev_nbd 00:20:41.450 ************************************ 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72133 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72133 /var/tmp/spdk-nbd.sock 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 72133 ']' 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:41.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:41.450 13:54:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:41.450 [2024-11-04 13:54:28.310161] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:20:41.450 [2024-11-04 13:54:28.310366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.747 [2024-11-04 13:54:28.508480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.747 [2024-11-04 13:54:28.641775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:42.315 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:42.887 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.888 
1+0 records in 00:20:42.888 1+0 records out 00:20:42.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496671 s, 8.2 MB/s 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:42.888 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.150 1+0 records in 00:20:43.150 1+0 records out 00:20:43.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813654 s, 5.0 MB/s 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:43.150 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:43.409 13:54:30 
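
The readiness check traced above ('waitfornbd' plus a one-block read) is the core wait-until-usable pattern of this test: poll /proc/partitions until the kernel lists the device, then prove it is actually readable with a single direct-I/O read whose byte count must be non-zero. A minimal sketch of that pattern; the 20-attempt budget and the size!=0 check match the trace, while the sleep interval and the mktemp scratch file (the trace uses a fixed nbdtest path) are assumptions:

#!/usr/bin/env bash
# Sketch of the waitfornbd pattern seen in the trace above.
waitfornbd() {
    local nbd_name=$1 i size tmp
    tmp=$(mktemp)
    for ((i = 1; i <= 20; i++)); do
        # The device is "up" once the kernel lists it as a partition.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    for ((i = 1; i <= 20; i++)); do
        # Prove it is readable: one 4 KiB block, bypassing the page cache.
        if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [[ $size != 0 ]] && return 0
        fi
        sleep 0.1
    done
    rm -f "$tmp"
    return 1
}

waitfornbd nbd0   # called with the bare name, as in the trace, not /dev/nbd0
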
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.409 1+0 records in 00:20:43.409 1+0 records out 00:20:43.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593872 s, 6.9 MB/s 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:43.409 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.053 1+0 records in 00:20:44.053 1+0 records out 00:20:44.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109105 s, 3.8 MB/s 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:44.053 13:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.312 1+0 records in 00:20:44.312 1+0 records out 00:20:44.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768988 s, 5.3 MB/s 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:44.312 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:20:44.571 13:54:31 
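
Each of the six bdevs above is exported the same way: an 'nbd_start_disk' RPC over the app's Unix socket, followed by the readiness check. A sketch of that export loop, using the rpc.py and socket paths visible in the trace and assuming the SPDK target is already running on that socket:

#!/usr/bin/env bash
# Sketch of the export loop: one nbd_start_disk RPC per bdev.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)

for bdev in "${bdev_list[@]}"; do
    # With no explicit /dev/nbdX argument the target picks a free
    # device and prints it on stdout.
    nbd_device=$("$rpc_py" -s "$sock" nbd_start_disk "$bdev")
    echo "$bdev exported as $nbd_device"
done

The second half of the test passes explicit devices instead (for example 'nbd_start_disk nvme2n1 /dev/nbd10', as seen later in the trace) to control the numbering.
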
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.571 1+0 records in 00:20:44.571 1+0 records out 00:20:44.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827167 s, 5.0 MB/s 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:44.571 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd0", 00:20:44.830 "bdev_name": "nvme0n1" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd1", 00:20:44.830 "bdev_name": "nvme1n1" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd2", 00:20:44.830 "bdev_name": "nvme2n1" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd3", 00:20:44.830 "bdev_name": "nvme2n2" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd4", 00:20:44.830 "bdev_name": "nvme2n3" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd5", 00:20:44.830 "bdev_name": "nvme3n1" 00:20:44.830 } 00:20:44.830 ]' 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd0", 00:20:44.830 "bdev_name": "nvme0n1" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd1", 00:20:44.830 "bdev_name": "nvme1n1" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd2", 00:20:44.830 "bdev_name": "nvme2n1" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd3", 00:20:44.830 "bdev_name": "nvme2n2" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd4", 00:20:44.830 "bdev_name": "nvme2n3" 00:20:44.830 }, 00:20:44.830 { 00:20:44.830 "nbd_device": "/dev/nbd5", 00:20:44.830 "bdev_name": "nvme3n1" 00:20:44.830 } 00:20:44.830 ]' 00:20:44.830 13:54:31 
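
The JSON just captured from 'nbd_get_disks' is turned into a plain device list with jq, and a 'grep -c /dev/nbd' count is later used to assert how many exports remain. A sketch of that bookkeeping step, under the same rpc.py and socket assumptions (jq is assumed installed; the harness itself relies on it):

#!/usr/bin/env bash
# Sketch: list current NBD exports and extract the device paths.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc_py" -s "$sock" nbd_get_disks)
mapfile -t nbd_disks < <(jq -r '.[] | .nbd_device' <<<"$nbd_disks_json")

# Mirrors the "grep -c /dev/nbd" count check in the trace: after a
# full stop sequence this list must be empty.
echo "${#nbd_disks[@]} NBD device(s) attached: ${nbd_disks[*]}"
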
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.830 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:45.089 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:45.089 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:45.089 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:45.089 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.089 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.089 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.089 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:45.348 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.348 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.348 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.607 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.865 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.123 13:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:46.691 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:47.258 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.259 13:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:47.518 /dev/nbd0 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.518 1+0 records in 00:20:47.518 1+0 records out 00:20:47.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914148 s, 4.5 MB/s 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.518 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:20:47.777 /dev/nbd1 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.777 1+0 records in 00:20:47.777 1+0 records out 00:20:47.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106422 s, 3.8 MB/s 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:47.777 13:54:34 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:47.777 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:20:48.036 /dev/nbd10 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.036 1+0 records in 00:20:48.036 1+0 records out 00:20:48.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621614 s, 6.6 MB/s 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:48.036 13:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:20:48.296 /dev/nbd11 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:48.296 13:54:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.296 1+0 records in 00:20:48.296 1+0 records out 00:20:48.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613346 s, 6.7 MB/s 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:48.296 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:20:48.555 /dev/nbd12 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:48.555 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.813 1+0 records in 00:20:48.813 1+0 records out 00:20:48.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811293 s, 5.0 MB/s 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:48.813 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:48.814 /dev/nbd13 00:20:49.072 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:49.073 1+0 records in 00:20:49.073 1+0 records out 00:20:49.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666212 s, 6.1 MB/s 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.073 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:49.331 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd0", 00:20:49.332 "bdev_name": "nvme0n1" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd1", 00:20:49.332 "bdev_name": "nvme1n1" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd10", 00:20:49.332 "bdev_name": "nvme2n1" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd11", 00:20:49.332 "bdev_name": "nvme2n2" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd12", 00:20:49.332 "bdev_name": "nvme2n3" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd13", 00:20:49.332 "bdev_name": "nvme3n1" 00:20:49.332 } 00:20:49.332 ]' 00:20:49.332 13:54:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd0", 00:20:49.332 "bdev_name": "nvme0n1" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd1", 00:20:49.332 "bdev_name": "nvme1n1" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd10", 00:20:49.332 "bdev_name": "nvme2n1" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd11", 00:20:49.332 "bdev_name": "nvme2n2" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd12", 00:20:49.332 "bdev_name": "nvme2n3" 00:20:49.332 }, 00:20:49.332 { 00:20:49.332 "nbd_device": "/dev/nbd13", 00:20:49.332 "bdev_name": "nvme3n1" 00:20:49.332 } 00:20:49.332 ]' 00:20:49.332 13:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:49.332 /dev/nbd1 00:20:49.332 /dev/nbd10 00:20:49.332 /dev/nbd11 00:20:49.332 /dev/nbd12 00:20:49.332 /dev/nbd13' 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:49.332 /dev/nbd1 00:20:49.332 /dev/nbd10 00:20:49.332 /dev/nbd11 00:20:49.332 /dev/nbd12 00:20:49.332 /dev/nbd13' 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:49.332 256+0 records in 00:20:49.332 256+0 records out 00:20:49.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010321 s, 102 MB/s 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:49.332 256+0 records in 00:20:49.332 256+0 records out 00:20:49.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127851 s, 8.2 MB/s 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.332 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:49.590 256+0 records in 00:20:49.590 256+0 records out 00:20:49.590 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.142143 s, 7.4 MB/s 00:20:49.590 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.590 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:49.590 256+0 records in 00:20:49.590 256+0 records out 00:20:49.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131934 s, 7.9 MB/s 00:20:49.590 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.590 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:49.849 256+0 records in 00:20:49.849 256+0 records out 00:20:49.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134195 s, 7.8 MB/s 00:20:49.849 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.849 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:49.849 256+0 records in 00:20:49.849 256+0 records out 00:20:49.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130164 s, 8.1 MB/s 00:20:49.849 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:49.849 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:50.108 256+0 records in 00:20:50.108 256+0 records out 00:20:50.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128019 s, 8.2 MB/s 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.108 13:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.675 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.933 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.190 13:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.448 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.706 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.964 13:54:38 
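
The write/verify pass a few records back (the 'nbd_dd_data_verify' write and verify calls) is the actual data-integrity check of this test: one 1 MiB random pattern is written to every device with direct I/O, then compared back byte for byte. A condensed sketch, assuming a mktemp scratch file in place of the fixed nbdrandtest path from the trace:

#!/usr/bin/env bash
# Sketch of the write/verify pass: same 1 MiB pattern to every device,
# direct I/O on the write, byte-level compare on the read-back.
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
pattern=$(mktemp)

dd if=/dev/urandom of="$pattern" bs=4096 count=256        # 1 MiB

for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    # -b reports differing bytes; -n 1M limits the compare to the pattern.
    cmp -b -n 1M "$pattern" "$dev" || echo "mismatch on $dev" >&2
done
rm "$pattern"

Teardown then mirrors startup: one 'nbd_stop_disk' RPC per device, with 'waitfornbd_exit' polling /proc/partitions until the name disappears, which is the loop running in the surrounding records.
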
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.964 13:54:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.529 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:52.530 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:52.787 malloc_lvol_verify 00:20:52.787 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:53.045 b7fb98ec-73c1-4dac-bc9d-d3e197d93293 00:20:53.045 13:54:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:53.303 983c95e8-6a4c-4b31-abc2-10335504fe32 00:20:53.303 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:53.561 /dev/nbd0 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:20:53.561 mke2fs 1.47.0 (5-Feb-2023) 00:20:53.561 Discarding device blocks: 0/4096 done 00:20:53.561 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:53.561 00:20:53.561 Allocating group tables: 0/1 done 00:20:53.561 Writing inode tables: 0/1 done 00:20:53.561 Creating journal (1024 blocks): done 00:20:53.561 Writing superblocks and filesystem accounting information: 0/1 done 00:20:53.561 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.561 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72133 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 72133 ']' 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 72133 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72133 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72133' 00:20:53.820 killing process with pid 72133 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 72133 00:20:53.820 13:54:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 72133 00:20:55.196 13:54:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:55.196 00:20:55.196 real 0m13.823s 00:20:55.196 user 0m18.480s 00:20:55.196 sys 0m5.820s 00:20:55.196 13:54:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.196 ************************************ 00:20:55.196 END TEST bdev_nbd 00:20:55.196 ************************************ 00:20:55.196 
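
The closing 'nbd_with_lvol_verify' step above exercises the NBD path end to end on a logical volume: a malloc bdev backs an lvstore, an lvol carved from it is exported as /dev/nbd0, and mkfs.ext4 completing (the mke2fs output above) is the pass criterion. A sketch of that round trip; the sizes (16 MB malloc bdev, 512 B blocks, 4 MB lvol) are taken verbatim from the RPCs in the trace:

#!/usr/bin/env bash
# Sketch of the lvol round trip traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

# The harness additionally waits for /sys/block/nbd0/size to report a
# non-zero capacity (8192 sectors here) before touching the device.
mkfs.ext4 /dev/nbd0
"$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0
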
13:54:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:55.196 13:54:42 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:55.196 13:54:42 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:20:55.196 13:54:42 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:20:55.196 13:54:42 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:55.196 13:54:42 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.196 13:54:42 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.196 13:54:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:55.196 ************************************ 00:20:55.196 START TEST bdev_fio 00:20:55.196 ************************************ 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:55.196 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:20:55.196 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:55.455 ************************************ 00:20:55.455 START TEST bdev_fio_rw_verify 00:20:55.455 ************************************ 00:20:55.455 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:55.456 13:54:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:55.714 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:55.715 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:55.715 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:55.715 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:55.715 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:55.715 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:55.715 fio-3.35 00:20:55.715 Starting 6 threads 00:21:07.929 00:21:07.929 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72575: Mon Nov 4 13:54:53 2024 00:21:07.929 read: IOPS=26.5k, BW=104MiB/s (109MB/s)(1036MiB/10001msec) 00:21:07.929 slat (usec): min=2, max=927, avg= 7.17, stdev= 5.64 00:21:07.929 clat (usec): min=129, max=11228, avg=710.09, 
stdev=294.40 00:21:07.929 lat (usec): min=136, max=11240, avg=717.26, stdev=295.23 00:21:07.929 clat percentiles (usec): 00:21:07.929 | 50.000th=[ 709], 99.000th=[ 1483], 99.900th=[ 2376], 99.990th=[ 4555], 00:21:07.929 | 99.999th=[11207] 00:21:07.929 write: IOPS=26.9k, BW=105MiB/s (110MB/s)(1050MiB/10001msec); 0 zone resets 00:21:07.929 slat (usec): min=13, max=2529, avg=28.37, stdev=34.65 00:21:07.929 clat (usec): min=92, max=8066, avg=796.70, stdev=297.85 00:21:07.929 lat (usec): min=109, max=8093, avg=825.06, stdev=301.27 00:21:07.929 clat percentiles (usec): 00:21:07.929 | 50.000th=[ 799], 99.000th=[ 1598], 99.900th=[ 2376], 99.990th=[ 5014], 00:21:07.929 | 99.999th=[ 8029] 00:21:07.929 bw ( KiB/s): min=85890, max=137184, per=99.67%, avg=107159.26, stdev=2353.79, samples=114 00:21:07.929 iops : min=21472, max=34296, avg=26789.37, stdev=588.46, samples=114 00:21:07.929 lat (usec) : 100=0.01%, 250=2.65%, 500=17.21%, 750=29.70%, 1000=33.69% 00:21:07.929 lat (msec) : 2=16.55%, 4=0.17%, 10=0.03%, 20=0.01% 00:21:07.929 cpu : usr=54.57%, sys=30.86%, ctx=7468, majf=0, minf=23012 00:21:07.929 IO depths : 1=12.1%, 2=24.7%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.929 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.929 issued rwts: total=265125,268820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:07.929 00:21:07.929 Run status group 0 (all jobs): 00:21:07.929 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1036MiB (1086MB), run=10001-10001msec 00:21:07.929 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1050MiB (1101MB), run=10001-10001msec 00:21:08.188 ----------------------------------------------------- 00:21:08.188 Suppressions used: 00:21:08.188 count bytes template 00:21:08.188 6 48 /usr/src/fio/parse.c 00:21:08.188 3507 336672 /usr/src/fio/iolog.c 00:21:08.188 1 8 libtcmalloc_minimal.so 00:21:08.188 1 904 libcrypto.so 00:21:08.188 ----------------------------------------------------- 00:21:08.188 00:21:08.188 00:21:08.188 real 0m12.935s 00:21:08.188 user 0m35.129s 00:21:08.188 sys 0m18.947s 00:21:08.188 13:54:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:08.188 ************************************ 00:21:08.188 END TEST bdev_fio_rw_verify 00:21:08.188 13:54:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:08.188 ************************************ 00:21:08.447 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "27d029a2-366b-47d5-ab1d-9278dc9a573f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "27d029a2-366b-47d5-ab1d-9278dc9a573f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "baf3906f-c0fd-48d6-91d9-e8e21a055aa6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "baf3906f-c0fd-48d6-91d9-e8e21a055aa6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8bb2ea4d-0351-499a-b9d4-2caecb259ed2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8bb2ea4d-0351-499a-b9d4-2caecb259ed2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2ba30e99-71c4-4877-b3fa-a93ef570e7dd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2ba30e99-71c4-4877-b3fa-a93ef570e7dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "9265d208-b463-4c5a-a637-cf096c0c2e36"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9265d208-b463-4c5a-a637-cf096c0c2e36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0f189d2e-c891-4480-8114-41da28aacfa9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0f189d2e-c891-4480-8114-41da28aacfa9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:08.448 /home/vagrant/spdk_repo/spdk 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:08.448 00:21:08.448 real 0m13.122s 00:21:08.448 user 0m35.218s 00:21:08.448 sys 0m19.047s 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:08.448 13:54:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:08.448 ************************************ 00:21:08.448 END TEST bdev_fio 00:21:08.448 ************************************ 00:21:08.448 13:54:55 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:08.448 13:54:55 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:08.448 13:54:55 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:08.448 13:54:55 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:08.448 13:54:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:08.448 ************************************ 00:21:08.448 START TEST bdev_verify 00:21:08.448 ************************************ 00:21:08.448 13:54:55 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:08.448 [2024-11-04 13:54:55.333797] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:21:08.448 [2024-11-04 13:54:55.333943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72747 ] 00:21:08.707 [2024-11-04 13:54:55.515395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:08.965 [2024-11-04 13:54:55.646248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.965 [2024-11-04 13:54:55.646249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.542 Running I/O for 5 seconds... 
00:21:11.419 22560.00 IOPS, 88.12 MiB/s [2024-11-04T13:54:59.716Z] 23600.00 IOPS, 92.19 MiB/s [2024-11-04T13:55:00.652Z] 23946.67 IOPS, 93.54 MiB/s [2024-11-04T13:55:01.600Z] 24344.00 IOPS, 95.09 MiB/s [2024-11-04T13:55:01.600Z] 23571.20 IOPS, 92.08 MiB/s 00:21:14.678 Latency(us) 00:21:14.678 [2024-11-04T13:55:01.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.678 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x0 length 0xa0000 00:21:14.678 nvme0n1 : 5.05 1750.26 6.84 0.00 0.00 73006.47 10298.51 80890.15 00:21:14.678 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0xa0000 length 0xa0000 00:21:14.678 nvme0n1 : 5.04 1726.39 6.74 0.00 0.00 74021.63 10360.93 80890.15 00:21:14.678 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x0 length 0xbd0bd 00:21:14.678 nvme1n1 : 5.03 2878.10 11.24 0.00 0.00 44300.76 4993.22 76396.25 00:21:14.678 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:14.678 nvme1n1 : 5.05 2984.15 11.66 0.00 0.00 42579.30 5679.79 80390.83 00:21:14.678 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x0 length 0x80000 00:21:14.678 nvme2n1 : 5.05 1774.65 6.93 0.00 0.00 71572.12 10173.68 70404.39 00:21:14.678 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x80000 length 0x80000 00:21:14.678 nvme2n1 : 5.03 1731.63 6.76 0.00 0.00 73603.70 9799.19 84884.72 00:21:14.678 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x0 length 0x80000 00:21:14.678 nvme2n2 : 5.05 1748.60 6.83 0.00 0.00 72496.42 17601.10 80390.83 00:21:14.678 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x80000 length 0x80000 00:21:14.678 nvme2n2 : 5.04 1725.66 6.74 0.00 0.00 73690.91 20097.71 67408.46 00:21:14.678 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x0 length 0x80000 00:21:14.678 nvme2n3 : 5.06 1769.40 6.91 0.00 0.00 71559.57 8987.79 80390.83 00:21:14.678 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x80000 length 0x80000 00:21:14.678 nvme2n3 : 5.06 1745.20 6.82 0.00 0.00 72737.12 5305.30 73899.64 00:21:14.678 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x0 length 0x20000 00:21:14.678 nvme3n1 : 5.07 1767.69 6.91 0.00 0.00 71587.72 3214.38 76396.25 00:21:14.678 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.678 Verification LBA range: start 0x20000 length 0x20000 00:21:14.678 nvme3n1 : 5.06 1743.92 6.81 0.00 0.00 72672.78 5118.05 84884.72 00:21:14.678 [2024-11-04T13:55:01.600Z] =================================================================================================================== 00:21:14.678 [2024-11-04T13:55:01.600Z] Total : 23345.64 91.19 0.00 0.00 65351.66 3214.38 84884.72 00:21:15.629 00:21:15.629 real 0m7.260s 00:21:15.629 user 0m11.287s 00:21:15.629 sys 0m2.036s 00:21:15.629 13:55:02 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:21:15.629 13:55:02 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:15.629 ************************************ 00:21:15.629 END TEST bdev_verify 00:21:15.629 ************************************ 00:21:15.629 13:55:02 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:15.629 13:55:02 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:15.629 13:55:02 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:15.629 13:55:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:15.888 ************************************ 00:21:15.888 START TEST bdev_verify_big_io 00:21:15.888 ************************************ 00:21:15.888 13:55:02 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:15.888 [2024-11-04 13:55:02.676542] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:21:15.888 [2024-11-04 13:55:02.676794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72860 ] 00:21:16.147 [2024-11-04 13:55:02.871088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:16.147 [2024-11-04 13:55:02.996324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.147 [2024-11-04 13:55:02.996345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.716 Running I/O for 5 seconds... 
00:21:22.540 1304.00 IOPS, 81.50 MiB/s [2024-11-04T13:55:09.720Z] 3051.50 IOPS, 190.72 MiB/s 00:21:22.798 Latency(us) 00:21:22.798 [2024-11-04T13:55:09.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.798 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x0 length 0xa000 00:21:22.798 nvme0n1 : 5.80 132.45 8.28 0.00 0.00 941663.90 72901.00 1270274.93 00:21:22.798 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0xa000 length 0xa000 00:21:22.798 nvme0n1 : 5.76 141.66 8.85 0.00 0.00 879610.61 114844.04 870817.40 00:21:22.798 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x0 length 0xbd0b 00:21:22.798 nvme1n1 : 5.75 152.96 9.56 0.00 0.00 799436.39 8550.89 950708.91 00:21:22.798 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:22.798 nvme1n1 : 5.77 172.06 10.75 0.00 0.00 708540.46 10610.59 1390112.18 00:21:22.798 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x0 length 0x8000 00:21:22.798 nvme2n1 : 5.76 140.38 8.77 0.00 0.00 848727.89 19473.55 906768.58 00:21:22.798 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x8000 length 0x8000 00:21:22.798 nvme2n1 : 5.77 123.34 7.71 0.00 0.00 968236.89 33704.23 870817.40 00:21:22.798 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x0 length 0x8000 00:21:22.798 nvme2n2 : 5.76 147.23 9.20 0.00 0.00 789475.92 71902.35 982665.51 00:21:22.798 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x8000 length 0x8000 00:21:22.798 nvme2n2 : 5.78 78.91 4.93 0.00 0.00 1493111.56 96868.45 2876094.17 00:21:22.798 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x0 length 0x8000 00:21:22.798 nvme2n3 : 5.76 86.06 5.38 0.00 0.00 1309553.24 74398.96 2620441.36 00:21:22.798 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x8000 length 0x8000 00:21:22.798 nvme2n3 : 5.78 144.06 9.00 0.00 0.00 804214.04 28586.18 1302231.53 00:21:22.798 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x0 length 0x2000 00:21:22.798 nvme3n1 : 5.81 129.51 8.09 0.00 0.00 853049.41 6459.98 2700332.86 00:21:22.798 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:22.798 Verification LBA range: start 0x2000 length 0x2000 00:21:22.798 nvme3n1 : 5.77 120.43 7.53 0.00 0.00 941847.60 13107.20 2764246.06 00:21:22.798 [2024-11-04T13:55:09.720Z] =================================================================================================================== 00:21:22.798 [2024-11-04T13:55:09.720Z] Total : 1569.08 98.07 0.00 0.00 904202.02 6459.98 2876094.17 00:21:24.702 00:21:24.702 real 0m8.626s 00:21:24.702 user 0m15.552s 00:21:24.702 sys 0m0.628s 00:21:24.702 13:55:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.702 13:55:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
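All three bdevperf stages in this run (verify, big-IO verify, and the write_zeroes pass that follows) share one binary and one JSON config; only the IO size, runtime, and core mask differ. Reconstructed from the traces:

    BP=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CFG=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128: queue depth, -o: IO size in bytes, -w: workload,
    # -C: let every reactor in the -m mask submit IO, -m 0x3: cores 0 and 1.
    $BP --json "$CFG" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
    $BP --json "$CFG" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
    $BP --json "$CFG" -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes

The per-bdev result tables above list each device twice because -C with -m 0x3 runs an independent job per core, hence the paired "Core Mask 0x1" and "Core Mask 0x2" rows for the same bdev.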
00:21:24.702 ************************************ 00:21:24.702 END TEST bdev_verify_big_io 00:21:24.702 ************************************ 00:21:24.702 13:55:11 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:24.702 13:55:11 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:24.702 13:55:11 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:24.702 13:55:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:24.702 ************************************ 00:21:24.702 START TEST bdev_write_zeroes 00:21:24.702 ************************************ 00:21:24.702 13:55:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:24.702 [2024-11-04 13:55:11.344967] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:21:24.702 [2024-11-04 13:55:11.345143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72973 ] 00:21:24.702 [2024-11-04 13:55:11.532153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.959 [2024-11-04 13:55:11.687440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.527 Running I/O for 1 seconds... 00:21:26.461 68166.00 IOPS, 266.27 MiB/s 00:21:26.461 Latency(us) 00:21:26.461 [2024-11-04T13:55:13.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.461 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.461 nvme0n1 : 1.02 10747.64 41.98 0.00 0.00 11896.73 7021.71 23218.47 00:21:26.461 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.461 nvme1n1 : 1.02 14610.49 57.07 0.00 0.00 8742.98 4337.86 20347.37 00:21:26.461 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.461 nvme2n1 : 1.03 10610.25 41.45 0.00 0.00 11979.21 6834.47 21970.16 00:21:26.461 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.461 nvme2n2 : 1.03 10598.68 41.40 0.00 0.00 11986.82 6179.11 21346.01 00:21:26.461 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.461 nvme2n3 : 1.03 10586.94 41.36 0.00 0.00 11989.86 6147.90 21346.01 00:21:26.461 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.461 nvme3n1 : 1.02 10634.48 41.54 0.00 0.00 11924.34 7677.07 23468.13 00:21:26.461 [2024-11-04T13:55:13.383Z] =================================================================================================================== 00:21:26.461 [2024-11-04T13:55:13.383Z] Total : 67788.48 264.80 0.00 0.00 11265.21 4337.86 23468.13 00:21:27.838 00:21:27.838 real 0m3.418s 00:21:27.838 user 0m2.554s 00:21:27.838 sys 0m0.668s 00:21:27.838 13:55:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.838 13:55:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:27.838 ************************************ 00:21:27.838 END TEST 
bdev_write_zeroes 00:21:27.838 ************************************ 00:21:27.838 13:55:14 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.838 13:55:14 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:27.838 13:55:14 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.838 13:55:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:27.838 ************************************ 00:21:27.839 START TEST bdev_json_nonenclosed 00:21:27.839 ************************************ 00:21:27.839 13:55:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.098 [2024-11-04 13:55:14.833228] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:21:28.098 [2024-11-04 13:55:14.833385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73034 ] 00:21:28.356 [2024-11-04 13:55:15.035994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.356 [2024-11-04 13:55:15.211658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.356 [2024-11-04 13:55:15.211796] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:28.356 [2024-11-04 13:55:15.211830] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:28.356 [2024-11-04 13:55:15.211848] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:28.614 00:21:28.614 real 0m0.803s 00:21:28.614 user 0m0.529s 00:21:28.614 sys 0m0.167s 00:21:28.614 13:55:15 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:28.614 13:55:15 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:28.614 ************************************ 00:21:28.614 END TEST bdev_json_nonenclosed 00:21:28.614 ************************************ 00:21:28.873 13:55:15 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.873 13:55:15 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:28.873 13:55:15 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:28.873 13:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:28.873 ************************************ 00:21:28.873 START TEST bdev_json_nonarray 00:21:28.873 ************************************ 00:21:28.873 13:55:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.873 [2024-11-04 13:55:15.665171] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
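bdev_json_nonenclosed and bdev_json_nonarray are negative tests: they feed bdevperf deliberately malformed configs and expect the app to refuse to start. The two failure modes quoted in the traces ("not enclosed in {}" and "'subsystems' should be an array") look roughly like this — hypothetical minimal files for illustration, not the actual nonenclosed.json/nonarray.json contents:

    # Well-formed: one top-level object whose "subsystems" member is an array.
    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # nonenclosed-style failure: the document is not a single {...} object.
    cat > bad_nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # nonarray-style failure: "subsystems" exists but is an object, not an array.
    cat > bad_nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF

In both failing cases json_config_prepare_ctx logs the error and spdk_app_stop exits non-zero, which is exactly what the tests assert on.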
00:21:28.873 [2024-11-04 13:55:15.665367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73065 ] 00:21:29.131 [2024-11-04 13:55:15.844356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.131 [2024-11-04 13:55:15.977985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.131 [2024-11-04 13:55:15.978102] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:21:29.131 [2024-11-04 13:55:15.978127] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:29.131 [2024-11-04 13:55:15.978141] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:29.388 00:21:29.388 real 0m0.694s 00:21:29.388 user 0m0.448s 00:21:29.388 sys 0m0.141s 00:21:29.388 13:55:16 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:29.388 13:55:16 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:29.388 ************************************ 00:21:29.388 END TEST bdev_json_nonarray 00:21:29.388 ************************************ 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:29.388 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:29.646 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:29.646 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:29.646 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:29.646 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:29.646 13:55:16 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:29.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.854 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.854 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.854 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.854 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.113 00:21:31.113 real 1m5.847s 00:21:31.113 user 1m45.332s 00:21:31.113 sys 0m32.723s 00:21:31.113 13:55:17 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:31.113 13:55:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:31.113 ************************************ 00:21:31.113 END TEST blockdev_xnvme 00:21:31.113 ************************************ 00:21:31.113 13:55:17 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:31.113 13:55:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:31.113 13:55:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:31.113 13:55:17 -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.113 ************************************ 00:21:31.113 START TEST ublk 00:21:31.113 ************************************ 00:21:31.113 13:55:17 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:31.113 * Looking for test storage... 00:21:31.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:31.113 13:55:17 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:31.113 13:55:17 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:21:31.113 13:55:17 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:31.113 13:55:18 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:31.113 13:55:18 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.113 13:55:18 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.113 13:55:18 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.113 13:55:18 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.113 13:55:18 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.113 13:55:18 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.113 13:55:18 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.113 13:55:18 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.113 13:55:18 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.113 13:55:18 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.113 13:55:18 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.113 13:55:18 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:31.113 13:55:18 ublk -- scripts/common.sh@345 -- # : 1 00:21:31.113 13:55:18 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.113 13:55:18 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.113 13:55:18 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:31.113 13:55:18 ublk -- scripts/common.sh@353 -- # local d=1 00:21:31.113 13:55:18 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.113 13:55:18 ublk -- scripts/common.sh@355 -- # echo 1 00:21:31.373 13:55:18 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.373 13:55:18 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:31.373 13:55:18 ublk -- scripts/common.sh@353 -- # local d=2 00:21:31.373 13:55:18 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.373 13:55:18 ublk -- scripts/common.sh@355 -- # echo 2 00:21:31.373 13:55:18 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.373 13:55:18 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.373 13:55:18 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.373 13:55:18 ublk -- scripts/common.sh@368 -- # return 0 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:31.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.373 --rc genhtml_branch_coverage=1 00:21:31.373 --rc genhtml_function_coverage=1 00:21:31.373 --rc genhtml_legend=1 00:21:31.373 --rc geninfo_all_blocks=1 00:21:31.373 --rc geninfo_unexecuted_blocks=1 00:21:31.373 00:21:31.373 ' 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:31.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.373 --rc genhtml_branch_coverage=1 00:21:31.373 --rc genhtml_function_coverage=1 00:21:31.373 --rc genhtml_legend=1 00:21:31.373 --rc geninfo_all_blocks=1 00:21:31.373 --rc geninfo_unexecuted_blocks=1 00:21:31.373 00:21:31.373 ' 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:31.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.373 --rc genhtml_branch_coverage=1 00:21:31.373 --rc genhtml_function_coverage=1 00:21:31.373 --rc genhtml_legend=1 00:21:31.373 --rc geninfo_all_blocks=1 00:21:31.373 --rc geninfo_unexecuted_blocks=1 00:21:31.373 00:21:31.373 ' 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:31.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.373 --rc genhtml_branch_coverage=1 00:21:31.373 --rc genhtml_function_coverage=1 00:21:31.373 --rc genhtml_legend=1 00:21:31.373 --rc geninfo_all_blocks=1 00:21:31.373 --rc geninfo_unexecuted_blocks=1 00:21:31.373 00:21:31.373 ' 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:31.373 13:55:18 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:31.373 13:55:18 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:31.373 13:55:18 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:31.373 13:55:18 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:31.373 13:55:18 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:31.373 13:55:18 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:31.373 13:55:18 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:31.373 13:55:18 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:31.373 13:55:18 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:31.373 13:55:18 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:31.373 13:55:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.373 ************************************ 00:21:31.373 START TEST test_save_ublk_config 00:21:31.373 ************************************ 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73359 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:31.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73359 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 73359 ']' 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.373 13:55:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:31.373 [2024-11-04 13:55:18.195232] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:21:31.373 [2024-11-04 13:55:18.195632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73359 ] 00:21:31.632 [2024-11-04 13:55:18.373770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.632 [2024-11-04 13:55:18.505450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:33.008 [2024-11-04 13:55:19.523606] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:33.008 [2024-11-04 13:55:19.524865] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:33.008 malloc0 00:21:33.008 [2024-11-04 13:55:19.619820] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:33.008 [2024-11-04 13:55:19.619961] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:33.008 [2024-11-04 13:55:19.619977] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:33.008 [2024-11-04 13:55:19.619987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:33.008 [2024-11-04 13:55:19.628692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:33.008 [2024-11-04 13:55:19.628741] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:33.008 [2024-11-04 13:55:19.635614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:33.008 [2024-11-04 13:55:19.635787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:33.008 [2024-11-04 13:55:19.652614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:33.008 0 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.008 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:33.267 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.267 13:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:33.267 "subsystems": [ 00:21:33.267 { 00:21:33.267 "subsystem": "fsdev", 00:21:33.267 "config": [ 00:21:33.267 { 00:21:33.267 "method": "fsdev_set_opts", 00:21:33.267 "params": { 00:21:33.267 "fsdev_io_pool_size": 65535, 00:21:33.267 "fsdev_io_cache_size": 256 00:21:33.267 } 00:21:33.267 } 00:21:33.267 ] 00:21:33.267 }, 00:21:33.267 { 00:21:33.267 "subsystem": "keyring", 00:21:33.267 "config": [] 00:21:33.267 }, 00:21:33.267 { 00:21:33.267 "subsystem": "iobuf", 00:21:33.267 "config": [ 00:21:33.267 { 
00:21:33.267 "method": "iobuf_set_options", 00:21:33.267 "params": { 00:21:33.267 "small_pool_count": 8192, 00:21:33.267 "large_pool_count": 1024, 00:21:33.267 "small_bufsize": 8192, 00:21:33.267 "large_bufsize": 135168, 00:21:33.267 "enable_numa": false 00:21:33.267 } 00:21:33.267 } 00:21:33.267 ] 00:21:33.267 }, 00:21:33.267 { 00:21:33.267 "subsystem": "sock", 00:21:33.267 "config": [ 00:21:33.267 { 00:21:33.267 "method": "sock_set_default_impl", 00:21:33.267 "params": { 00:21:33.267 "impl_name": "posix" 00:21:33.267 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "sock_impl_set_options", 00:21:33.268 "params": { 00:21:33.268 "impl_name": "ssl", 00:21:33.268 "recv_buf_size": 4096, 00:21:33.268 "send_buf_size": 4096, 00:21:33.268 "enable_recv_pipe": true, 00:21:33.268 "enable_quickack": false, 00:21:33.268 "enable_placement_id": 0, 00:21:33.268 "enable_zerocopy_send_server": true, 00:21:33.268 "enable_zerocopy_send_client": false, 00:21:33.268 "zerocopy_threshold": 0, 00:21:33.268 "tls_version": 0, 00:21:33.268 "enable_ktls": false 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "sock_impl_set_options", 00:21:33.268 "params": { 00:21:33.268 "impl_name": "posix", 00:21:33.268 "recv_buf_size": 2097152, 00:21:33.268 "send_buf_size": 2097152, 00:21:33.268 "enable_recv_pipe": true, 00:21:33.268 "enable_quickack": false, 00:21:33.268 "enable_placement_id": 0, 00:21:33.268 "enable_zerocopy_send_server": true, 00:21:33.268 "enable_zerocopy_send_client": false, 00:21:33.268 "zerocopy_threshold": 0, 00:21:33.268 "tls_version": 0, 00:21:33.268 "enable_ktls": false 00:21:33.268 } 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "vmd", 00:21:33.268 "config": [] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "accel", 00:21:33.268 "config": [ 00:21:33.268 { 00:21:33.268 "method": "accel_set_options", 00:21:33.268 "params": { 00:21:33.268 "small_cache_size": 128, 00:21:33.268 "large_cache_size": 16, 00:21:33.268 "task_count": 2048, 00:21:33.268 "sequence_count": 2048, 00:21:33.268 "buf_count": 2048 00:21:33.268 } 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "bdev", 00:21:33.268 "config": [ 00:21:33.268 { 00:21:33.268 "method": "bdev_set_options", 00:21:33.268 "params": { 00:21:33.268 "bdev_io_pool_size": 65535, 00:21:33.268 "bdev_io_cache_size": 256, 00:21:33.268 "bdev_auto_examine": true, 00:21:33.268 "iobuf_small_cache_size": 128, 00:21:33.268 "iobuf_large_cache_size": 16 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "bdev_raid_set_options", 00:21:33.268 "params": { 00:21:33.268 "process_window_size_kb": 1024, 00:21:33.268 "process_max_bandwidth_mb_sec": 0 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "bdev_iscsi_set_options", 00:21:33.268 "params": { 00:21:33.268 "timeout_sec": 30 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "bdev_nvme_set_options", 00:21:33.268 "params": { 00:21:33.268 "action_on_timeout": "none", 00:21:33.268 "timeout_us": 0, 00:21:33.268 "timeout_admin_us": 0, 00:21:33.268 "keep_alive_timeout_ms": 10000, 00:21:33.268 "arbitration_burst": 0, 00:21:33.268 "low_priority_weight": 0, 00:21:33.268 "medium_priority_weight": 0, 00:21:33.268 "high_priority_weight": 0, 00:21:33.268 "nvme_adminq_poll_period_us": 10000, 00:21:33.268 "nvme_ioq_poll_period_us": 0, 00:21:33.268 "io_queue_requests": 0, 00:21:33.268 "delay_cmd_submit": true, 00:21:33.268 "transport_retry_count": 4, 00:21:33.268 
"bdev_retry_count": 3, 00:21:33.268 "transport_ack_timeout": 0, 00:21:33.268 "ctrlr_loss_timeout_sec": 0, 00:21:33.268 "reconnect_delay_sec": 0, 00:21:33.268 "fast_io_fail_timeout_sec": 0, 00:21:33.268 "disable_auto_failback": false, 00:21:33.268 "generate_uuids": false, 00:21:33.268 "transport_tos": 0, 00:21:33.268 "nvme_error_stat": false, 00:21:33.268 "rdma_srq_size": 0, 00:21:33.268 "io_path_stat": false, 00:21:33.268 "allow_accel_sequence": false, 00:21:33.268 "rdma_max_cq_size": 0, 00:21:33.268 "rdma_cm_event_timeout_ms": 0, 00:21:33.268 "dhchap_digests": [ 00:21:33.268 "sha256", 00:21:33.268 "sha384", 00:21:33.268 "sha512" 00:21:33.268 ], 00:21:33.268 "dhchap_dhgroups": [ 00:21:33.268 "null", 00:21:33.268 "ffdhe2048", 00:21:33.268 "ffdhe3072", 00:21:33.268 "ffdhe4096", 00:21:33.268 "ffdhe6144", 00:21:33.268 "ffdhe8192" 00:21:33.268 ] 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "bdev_nvme_set_hotplug", 00:21:33.268 "params": { 00:21:33.268 "period_us": 100000, 00:21:33.268 "enable": false 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "bdev_malloc_create", 00:21:33.268 "params": { 00:21:33.268 "name": "malloc0", 00:21:33.268 "num_blocks": 8192, 00:21:33.268 "block_size": 4096, 00:21:33.268 "physical_block_size": 4096, 00:21:33.268 "uuid": "c69b91bb-5a22-482a-b321-ebb40cf0f29d", 00:21:33.268 "optimal_io_boundary": 0, 00:21:33.268 "md_size": 0, 00:21:33.268 "dif_type": 0, 00:21:33.268 "dif_is_head_of_md": false, 00:21:33.268 "dif_pi_format": 0 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "bdev_wait_for_examine" 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "scsi", 00:21:33.268 "config": null 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "scheduler", 00:21:33.268 "config": [ 00:21:33.268 { 00:21:33.268 "method": "framework_set_scheduler", 00:21:33.268 "params": { 00:21:33.268 "name": "static" 00:21:33.268 } 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "vhost_scsi", 00:21:33.268 "config": [] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "vhost_blk", 00:21:33.268 "config": [] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "ublk", 00:21:33.268 "config": [ 00:21:33.268 { 00:21:33.268 "method": "ublk_create_target", 00:21:33.268 "params": { 00:21:33.268 "cpumask": "1" 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "ublk_start_disk", 00:21:33.268 "params": { 00:21:33.268 "bdev_name": "malloc0", 00:21:33.268 "ublk_id": 0, 00:21:33.268 "num_queues": 1, 00:21:33.268 "queue_depth": 128 00:21:33.268 } 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "nbd", 00:21:33.268 "config": [] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "nvmf", 00:21:33.268 "config": [ 00:21:33.268 { 00:21:33.268 "method": "nvmf_set_config", 00:21:33.268 "params": { 00:21:33.268 "discovery_filter": "match_any", 00:21:33.268 "admin_cmd_passthru": { 00:21:33.268 "identify_ctrlr": false 00:21:33.268 }, 00:21:33.268 "dhchap_digests": [ 00:21:33.268 "sha256", 00:21:33.268 "sha384", 00:21:33.268 "sha512" 00:21:33.268 ], 00:21:33.268 "dhchap_dhgroups": [ 00:21:33.268 "null", 00:21:33.268 "ffdhe2048", 00:21:33.268 "ffdhe3072", 00:21:33.268 "ffdhe4096", 00:21:33.268 "ffdhe6144", 00:21:33.268 "ffdhe8192" 00:21:33.268 ] 00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "nvmf_set_max_subsystems", 00:21:33.268 "params": { 00:21:33.268 "max_subsystems": 1024 
00:21:33.268 } 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "method": "nvmf_set_crdt", 00:21:33.268 "params": { 00:21:33.268 "crdt1": 0, 00:21:33.268 "crdt2": 0, 00:21:33.268 "crdt3": 0 00:21:33.268 } 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }, 00:21:33.268 { 00:21:33.268 "subsystem": "iscsi", 00:21:33.268 "config": [ 00:21:33.268 { 00:21:33.268 "method": "iscsi_set_options", 00:21:33.268 "params": { 00:21:33.268 "node_base": "iqn.2016-06.io.spdk", 00:21:33.268 "max_sessions": 128, 00:21:33.268 "max_connections_per_session": 2, 00:21:33.268 "max_queue_depth": 64, 00:21:33.268 "default_time2wait": 2, 00:21:33.268 "default_time2retain": 20, 00:21:33.268 "first_burst_length": 8192, 00:21:33.268 "immediate_data": true, 00:21:33.268 "allow_duplicated_isid": false, 00:21:33.268 "error_recovery_level": 0, 00:21:33.268 "nop_timeout": 60, 00:21:33.268 "nop_in_interval": 30, 00:21:33.268 "disable_chap": false, 00:21:33.268 "require_chap": false, 00:21:33.268 "mutual_chap": false, 00:21:33.268 "chap_group": 0, 00:21:33.268 "max_large_datain_per_connection": 64, 00:21:33.268 "max_r2t_per_connection": 4, 00:21:33.268 "pdu_pool_size": 36864, 00:21:33.268 "immediate_data_pool_size": 16384, 00:21:33.268 "data_out_pool_size": 2048 00:21:33.268 } 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 } 00:21:33.268 ] 00:21:33.268 }' 00:21:33.268 13:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73359 00:21:33.268 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 73359 ']' 00:21:33.268 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 73359 00:21:33.268 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:21:33.268 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:33.268 13:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73359 00:21:33.268 13:55:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:33.269 killing process with pid 73359 00:21:33.269 13:55:20 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:33.269 13:55:20 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73359' 00:21:33.269 13:55:20 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 73359 00:21:33.269 13:55:20 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 73359 00:21:35.206 [2024-11-04 13:55:21.650457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:35.206 [2024-11-04 13:55:21.680744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:35.206 [2024-11-04 13:55:21.680922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:35.206 [2024-11-04 13:55:21.688653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:35.206 [2024-11-04 13:55:21.688748] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:35.206 [2024-11-04 13:55:21.688768] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:35.206 [2024-11-04 13:55:21.688799] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:35.206 [2024-11-04 13:55:21.688996] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:37.114 13:55:23 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73431 00:21:37.114 13:55:23 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73431 00:21:37.114 13:55:23 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 73431 ']' 00:21:37.114 13:55:23 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.114 13:55:23 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:37.114 "subsystems": [ 00:21:37.114 { 00:21:37.114 "subsystem": "fsdev", 00:21:37.114 "config": [ 00:21:37.114 { 00:21:37.114 "method": "fsdev_set_opts", 00:21:37.114 "params": { 00:21:37.114 "fsdev_io_pool_size": 65535, 00:21:37.114 "fsdev_io_cache_size": 256 00:21:37.114 } 00:21:37.114 } 00:21:37.114 ] 00:21:37.114 }, 00:21:37.114 { 00:21:37.114 "subsystem": "keyring", 00:21:37.114 "config": [] 00:21:37.114 }, 00:21:37.114 { 00:21:37.114 "subsystem": "iobuf", 00:21:37.114 "config": [ 00:21:37.114 { 00:21:37.114 "method": "iobuf_set_options", 00:21:37.114 "params": { 00:21:37.114 "small_pool_count": 8192, 00:21:37.114 "large_pool_count": 1024, 00:21:37.114 "small_bufsize": 8192, 00:21:37.114 "large_bufsize": 135168, 00:21:37.114 "enable_numa": false 00:21:37.114 } 00:21:37.114 } 00:21:37.114 ] 00:21:37.114 }, 00:21:37.114 { 00:21:37.114 "subsystem": "sock", 00:21:37.114 "config": [ 00:21:37.114 { 00:21:37.114 "method": "sock_set_default_impl", 00:21:37.114 "params": { 00:21:37.114 "impl_name": "posix" 00:21:37.114 } 00:21:37.114 }, 00:21:37.114 { 00:21:37.114 "method": "sock_impl_set_options", 00:21:37.114 "params": { 00:21:37.114 "impl_name": "ssl", 00:21:37.114 "recv_buf_size": 4096, 00:21:37.114 "send_buf_size": 4096, 00:21:37.114 "enable_recv_pipe": true, 00:21:37.114 "enable_quickack": false, 00:21:37.114 "enable_placement_id": 0, 00:21:37.115 "enable_zerocopy_send_server": true, 00:21:37.115 "enable_zerocopy_send_client": false, 00:21:37.115 "zerocopy_threshold": 0, 00:21:37.115 "tls_version": 0, 00:21:37.115 "enable_ktls": false 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "sock_impl_set_options", 00:21:37.115 "params": { 00:21:37.115 "impl_name": "posix", 00:21:37.115 "recv_buf_size": 2097152, 00:21:37.115 "send_buf_size": 2097152, 00:21:37.115 "enable_recv_pipe": true, 00:21:37.115 "enable_quickack": false, 00:21:37.115 "enable_placement_id": 0, 00:21:37.115 "enable_zerocopy_send_server": true, 00:21:37.115 "enable_zerocopy_send_client": false, 00:21:37.115 "zerocopy_threshold": 0, 00:21:37.115 "tls_version": 0, 00:21:37.115 "enable_ktls": false 00:21:37.115 } 00:21:37.115 } 00:21:37.115 ] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "vmd", 00:21:37.115 "config": [] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "accel", 00:21:37.115 "config": [ 00:21:37.115 { 00:21:37.115 "method": "accel_set_options", 00:21:37.115 "params": { 00:21:37.115 "small_cache_size": 128, 00:21:37.115 "large_cache_size": 16, 00:21:37.115 "task_count": 2048, 00:21:37.115 "sequence_count": 2048, 00:21:37.115 "buf_count": 2048 00:21:37.115 } 00:21:37.115 } 00:21:37.115 ] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "bdev", 00:21:37.115 "config": [ 00:21:37.115 { 00:21:37.115 "method": "bdev_set_options", 00:21:37.115 "params": { 00:21:37.115 "bdev_io_pool_size": 65535, 00:21:37.115 "bdev_io_cache_size": 256, 00:21:37.115 "bdev_auto_examine": true, 00:21:37.115 "iobuf_small_cache_size": 128, 00:21:37.115 "iobuf_large_cache_size": 16 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": 
"bdev_raid_set_options", 00:21:37.115 "params": { 00:21:37.115 "process_window_size_kb": 1024, 00:21:37.115 "process_max_bandwidth_mb_sec": 0 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "bdev_iscsi_set_options", 00:21:37.115 "params": { 00:21:37.115 "timeout_sec": 30 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "bdev_nvme_set_options", 00:21:37.115 "params": { 00:21:37.115 "action_on_timeout": "none", 00:21:37.115 "timeout_us": 0, 00:21:37.115 "timeout_admin_us": 0, 00:21:37.115 "keep_alive_timeout_ms": 10000, 00:21:37.115 "arbitration_burst": 0, 00:21:37.115 "low_priority_weight": 0, 00:21:37.115 "medium_priority_weight": 0, 00:21:37.115 "high_priority_weight": 0, 00:21:37.115 "nvme_adminq_poll_period_us": 10000, 00:21:37.115 "nvme_ioq_poll_period_us": 0, 00:21:37.115 "io_queue_requests": 0, 00:21:37.115 "delay_cmd_submit": true, 00:21:37.115 "transport_retry_count": 4, 00:21:37.115 "bdev_retry_count": 3, 00:21:37.115 "transport_ack_timeout": 0, 00:21:37.115 "ctrlr_loss_timeout_sec": 0, 00:21:37.115 "reconnect_delay_sec": 0, 00:21:37.115 "fast_io_fail_timeout_sec": 0, 00:21:37.115 "disable_auto_failback": false, 00:21:37.115 "generate_uuids": false, 00:21:37.115 "transport_tos": 0, 00:21:37.115 "nvme_error_stat": false, 00:21:37.115 "rdma_srq_size": 0, 00:21:37.115 "io_path_stat": false, 00:21:37.115 "allow_accel_sequence": false, 00:21:37.115 "rdma_max_cq_size": 0, 00:21:37.115 "rdma_cm_event_timeout_ms": 0, 00:21:37.115 "dhchap_digests": [ 00:21:37.115 "sha256", 00:21:37.115 "sha384", 00:21:37.115 "sha512" 00:21:37.115 ], 00:21:37.115 "dhchap_dhgroups": [ 00:21:37.115 "null", 00:21:37.115 "ffdhe2048", 00:21:37.115 "ffdhe3072", 00:21:37.115 "ffdhe4096", 00:21:37.115 "ffdhe6144", 00:21:37.115 "ffdhe8192" 00:21:37.115 ] 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "bdev_nvme_set_hotplug", 00:21:37.115 "params": { 00:21:37.115 "period_us": 100000, 00:21:37.115 "enable": false 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "bdev_malloc_create", 00:21:37.115 "params": { 00:21:37.115 "name": "malloc0", 00:21:37.115 "num_blocks": 8192, 00:21:37.115 "block_size": 4096, 00:21:37.115 "physical_block_size": 4096, 00:21:37.115 "uuid": "c69b91bb-5a22-482a-b321-ebb40cf0f29d", 00:21:37.115 "optimal_io_boundary": 0, 00:21:37.115 "md_size": 0, 00:21:37.115 "dif_type": 0, 00:21:37.115 "dif_is_head_of_md": false, 00:21:37.115 "dif_pi_format": 0 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "bdev_wait_for_examine" 00:21:37.115 } 00:21:37.115 ] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "scsi", 00:21:37.115 "config": null 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "scheduler", 00:21:37.115 "config": [ 00:21:37.115 { 00:21:37.115 "method": "framework_set_scheduler", 00:21:37.115 "params": { 00:21:37.115 "name": "static" 00:21:37.115 } 00:21:37.115 } 00:21:37.115 ] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "vhost_scsi", 00:21:37.115 "config": [] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "vhost_blk", 00:21:37.115 "config": [] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "ublk", 00:21:37.115 "config": [ 00:21:37.115 { 00:21:37.115 "method": "ublk_create_target", 00:21:37.115 "params": { 00:21:37.115 "cpumask": "1" 00:21:37.115 } 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "method": "ublk_start_disk", 00:21:37.115 "params": { 00:21:37.115 "bdev_name": "malloc0", 00:21:37.115 "ublk_id": 0, 00:21:37.115 "num_queues": 1, 
00:21:37.115 "queue_depth": 128 00:21:37.115 } 00:21:37.115 } 00:21:37.115 ] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "nbd", 00:21:37.115 "config": [] 00:21:37.115 }, 00:21:37.115 { 00:21:37.115 "subsystem": "nvmf", 00:21:37.115 "config": [ 00:21:37.115 { 00:21:37.115 "method": "nvmf_set_config", 00:21:37.115 "params": { 00:21:37.115 "discovery_filter": "match_any", 00:21:37.115 "admin_cmd_passthru": { 00:21:37.115 "identify_ctrlr": false 00:21:37.115 }, 00:21:37.115 "dhchap_digests": [ 00:21:37.115 "sha256", 00:21:37.115 "sha384", 00:21:37.115 "sha512" 00:21:37.115 ], 00:21:37.115 "dhchap_dhgroups": [ 00:21:37.115 "null", 00:21:37.116 "ffdhe2048", 00:21:37.116 "ffdhe3072", 00:21:37.116 "ffdhe4096", 00:21:37.116 "ffdhe6144", 00:21:37.116 "ffdhe81 13:55:23 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:37.116 92" 00:21:37.116 ] 00:21:37.116 } 00:21:37.116 }, 00:21:37.116 { 00:21:37.116 "method": "nvmf_set_max_subsystems", 00:21:37.116 "params": { 00:21:37.116 "max_subsystems": 1024 00:21:37.116 } 00:21:37.116 }, 00:21:37.116 { 00:21:37.116 "method": "nvmf_set_crdt", 00:21:37.116 "params": { 00:21:37.116 "crdt1": 0, 00:21:37.116 "crdt2": 0, 00:21:37.116 "crdt3": 0 00:21:37.116 } 00:21:37.116 } 00:21:37.116 ] 00:21:37.116 }, 00:21:37.116 { 00:21:37.116 "subsystem": "iscsi", 00:21:37.116 "config": [ 00:21:37.116 { 00:21:37.116 "method": "iscsi_set_options", 00:21:37.116 "params": { 00:21:37.116 "node_base": "iqn.2016-06.io.spdk", 00:21:37.116 "max_sessions": 128, 00:21:37.116 "max_connections_per_session": 2, 00:21:37.116 "max_queue_depth": 64, 00:21:37.116 "default_time2wait": 2, 00:21:37.116 "default_time2retain": 20, 00:21:37.116 "first_burst_length": 8192, 00:21:37.116 "immediate_data": true, 00:21:37.116 "allow_duplicated_isid": false, 00:21:37.116 "error_recovery_level": 0, 00:21:37.116 "nop_timeout": 60, 00:21:37.116 "nop_in_interval": 30, 00:21:37.116 "disable_chap": false, 00:21:37.116 "require_chap": false, 00:21:37.116 "mutual_chap": false, 00:21:37.116 "chap_group": 0, 00:21:37.116 "max_large_datain_per_connection": 64, 00:21:37.116 "max_r2t_per_connection": 4, 00:21:37.116 "pdu_pool_size": 36864, 00:21:37.116 "immediate_data_pool_size": 16384, 00:21:37.116 "data_out_pool_size": 2048 00:21:37.116 } 00:21:37.116 } 00:21:37.116 ] 00:21:37.116 } 00:21:37.116 ] 00:21:37.116 }' 00:21:37.116 13:55:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.116 13:55:23 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:37.116 13:55:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:37.116 13:55:23 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:37.116 [2024-11-04 13:55:23.931542] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:21:37.116 [2024-11-04 13:55:23.931922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73431 ] 00:21:37.374 [2024-11-04 13:55:24.133506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.374 [2024-11-04 13:55:24.264853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.796 [2024-11-04 13:55:25.443623] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:38.796 [2024-11-04 13:55:25.444996] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:38.796 [2024-11-04 13:55:25.451844] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:38.796 [2024-11-04 13:55:25.451976] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:38.796 [2024-11-04 13:55:25.451995] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:38.796 [2024-11-04 13:55:25.452007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:38.796 [2024-11-04 13:55:25.459837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:38.796 [2024-11-04 13:55:25.459881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:38.796 [2024-11-04 13:55:25.467626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:38.796 [2024-11-04 13:55:25.467770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:38.796 [2024-11-04 13:55:25.484608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73431 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 73431 ']' 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 73431 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73431 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:38.796 killing process with pid 73431 00:21:38.796 
13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73431' 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 73431 00:21:38.796 13:55:25 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 73431 00:21:41.328 [2024-11-04 13:55:27.757198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:41.328 [2024-11-04 13:55:27.789711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:41.328 [2024-11-04 13:55:27.789923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:41.328 [2024-11-04 13:55:27.797654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:41.328 [2024-11-04 13:55:27.797744] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:41.328 [2024-11-04 13:55:27.797757] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:41.328 [2024-11-04 13:55:27.797793] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:41.328 [2024-11-04 13:55:27.798018] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:43.231 13:55:30 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:43.232 00:21:43.232 real 0m12.010s 00:21:43.232 user 0m9.445s 00:21:43.232 sys 0m3.543s 00:21:43.232 13:55:30 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:43.232 13:55:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:43.232 ************************************ 00:21:43.232 END TEST test_save_ublk_config 00:21:43.232 ************************************ 00:21:43.232 13:55:30 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73529 00:21:43.232 13:55:30 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.232 13:55:30 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:43.232 13:55:30 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73529 00:21:43.232 13:55:30 ublk -- common/autotest_common.sh@833 -- # '[' -z 73529 ']' 00:21:43.232 13:55:30 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.232 13:55:30 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:43.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.232 13:55:30 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.232 13:55:30 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:43.232 13:55:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:43.584 [2024-11-04 13:55:30.281415] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:21:43.584 [2024-11-04 13:55:30.281622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73529 ] 00:21:43.584 [2024-11-04 13:55:30.477785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:43.842 [2024-11-04 13:55:30.617693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.842 [2024-11-04 13:55:30.617740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.777 13:55:31 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:44.777 13:55:31 ublk -- common/autotest_common.sh@866 -- # return 0 00:21:44.777 13:55:31 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:44.777 13:55:31 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:44.777 13:55:31 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:44.777 13:55:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:44.777 ************************************ 00:21:44.777 START TEST test_create_ublk 00:21:44.777 ************************************ 00:21:44.777 13:55:31 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:21:44.777 13:55:31 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:44.777 13:55:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.777 13:55:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:44.777 [2024-11-04 13:55:31.670602] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:44.777 [2024-11-04 13:55:31.673913] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:44.777 13:55:31 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.777 13:55:31 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:44.777 13:55:31 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:44.777 13:55:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.777 13:55:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:45.344 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.344 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:21:45.344 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:45.344 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.344 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:45.344 [2024-11-04 13:55:32.013874] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:45.344 [2024-11-04 13:55:32.014451] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:45.344 [2024-11-04 13:55:32.014473] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:45.344 [2024-11-04 13:55:32.014484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:45.344 [2024-11-04 13:55:32.022963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:45.344 [2024-11-04 13:55:32.023016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:45.344 
[2024-11-04 13:55:32.029673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:45.344 [2024-11-04 13:55:32.038726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:45.344 [2024-11-04 13:55:32.053803] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:45.344 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.344 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:21:45.344 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:21:45.344 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:21:45.345 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.345 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:45.345 13:55:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:21:45.345 { 00:21:45.345 "ublk_device": "/dev/ublkb0", 00:21:45.345 "id": 0, 00:21:45.345 "queue_depth": 512, 00:21:45.345 "num_queues": 4, 00:21:45.345 "bdev_name": "Malloc0" 00:21:45.345 } 00:21:45.345 ]' 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:21:45.345 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:21:45.603 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:45.603 13:55:32 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:21:45.603 13:55:32 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:21:45.603 fio: verification read phase will never start because write phase uses all of runtime 00:21:45.603 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:21:45.603 fio-3.35 00:21:45.603 Starting 1 process 00:21:55.622 00:21:55.622 fio_test: (groupid=0, jobs=1): err= 0: pid=73588: Mon Nov 4 13:55:42 2024 00:21:55.622 write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(435MiB/10000msec); 0 zone resets 00:21:55.622 clat (usec): min=53, max=8018, avg=88.59, stdev=154.42 00:21:55.622 lat (usec): min=54, max=8025, avg=89.34, stdev=154.56 00:21:55.622 clat percentiles (usec): 00:21:55.622 | 1.00th=[ 65], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:21:55.622 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 81], 00:21:55.622 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 95], 95.00th=[ 104], 00:21:55.622 | 99.00th=[ 127], 99.50th=[ 143], 99.90th=[ 3359], 99.95th=[ 3556], 00:21:55.622 | 99.99th=[ 3818] 00:21:55.622 bw ( KiB/s): min=19768, max=48560, per=99.45%, avg=44263.58, stdev=6227.02, samples=19 00:21:55.622 iops : min= 4942, max=12140, avg=11066.00, stdev=1556.78, samples=19 00:21:55.622 lat (usec) : 100=93.41%, 250=6.25%, 500=0.01%, 750=0.02%, 1000=0.02% 00:21:55.622 lat (msec) : 2=0.07%, 4=0.21%, 10=0.01% 00:21:55.622 cpu : usr=3.66%, sys=10.20%, ctx=111269, majf=0, minf=798 00:21:55.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:55.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.622 issued rwts: total=0,111266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:55.622 00:21:55.622 Run status group 0 (all jobs): 00:21:55.622 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=435MiB (456MB), run=10000-10000msec 00:21:55.622 00:21:55.622 Disk stats (read/write): 00:21:55.622 ublkb0: ios=0/110002, merge=0/0, ticks=0/8573, in_queue=8574, util=99.07% 00:21:55.622 13:55:42 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:21:55.622 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.622 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:55.622 [2024-11-04 13:55:42.533107] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:55.880 [2024-11-04 13:55:42.577668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:55.880 [2024-11-04 13:55:42.578506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:55.880 [2024-11-04 13:55:42.585804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:55.880 [2024-11-04 13:55:42.586292] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:55.880 [2024-11-04 13:55:42.586411] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.880 13:55:42 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:21:55.880 13:55:42 
ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:55.880 [2024-11-04 13:55:42.609756] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:21:55.880 request: 00:21:55.880 { 00:21:55.880 "ublk_id": 0, 00:21:55.880 "method": "ublk_stop_disk", 00:21:55.880 "req_id": 1 00:21:55.880 } 00:21:55.880 Got JSON-RPC error response 00:21:55.880 response: 00:21:55.880 { 00:21:55.880 "code": -19, 00:21:55.880 "message": "No such device" 00:21:55.880 } 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.880 13:55:42 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:55.880 [2024-11-04 13:55:42.625751] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:55.880 [2024-11-04 13:55:42.633868] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:55.880 [2024-11-04 13:55:42.633960] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.880 13:55:42 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.880 13:55:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.816 13:55:43 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.816 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:21:56.816 ************************************ 00:21:56.816 END TEST test_create_ublk 00:21:56.816 ************************************ 00:21:56.816 13:55:43 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:56.816 00:21:56.816 real 0m11.925s 00:21:56.816 user 0m0.746s 00:21:56.817 sys 0m1.142s 00:21:56.817 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:56.817 13:55:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.817 13:55:43 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:21:56.817 13:55:43 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:56.817 13:55:43 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:56.817 13:55:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.817 ************************************ 00:21:56.817 START TEST test_create_multi_ublk 00:21:56.817 ************************************ 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.817 [2024-11-04 13:55:43.650603] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:56.817 [2024-11-04 13:55:43.653467] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.817 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.076 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.076 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:21:57.076 13:55:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:57.076 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.076 13:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.076 [2024-11-04 13:55:43.971803] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:57.076 [2024-11-04 
13:55:43.972342] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:57.076 [2024-11-04 13:55:43.972362] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:57.076 [2024-11-04 13:55:43.972378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:57.076 [2024-11-04 13:55:43.978591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:57.076 [2024-11-04 13:55:43.978627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:57.076 [2024-11-04 13:55:43.986622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:57.076 [2024-11-04 13:55:43.987363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:57.335 [2024-11-04 13:55:44.004700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:57.335 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.335 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:21:57.335 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:57.335 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:21:57.335 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.335 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.616 [2024-11-04 13:55:44.343798] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:21:57.616 [2024-11-04 13:55:44.344329] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:21:57.616 [2024-11-04 13:55:44.344352] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:57.616 [2024-11-04 13:55:44.344362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:57.616 [2024-11-04 13:55:44.352944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:57.616 [2024-11-04 13:55:44.352985] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:57.616 [2024-11-04 13:55:44.359642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:57.616 [2024-11-04 13:55:44.360408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:57.616 [2024-11-04 13:55:44.365441] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.616 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.875 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.876 [2024-11-04 13:55:44.727782] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:21:57.876 [2024-11-04 13:55:44.728342] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:21:57.876 [2024-11-04 13:55:44.728363] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:21:57.876 [2024-11-04 13:55:44.728376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:21:57.876 [2024-11-04 13:55:44.735657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:57.876 [2024-11-04 13:55:44.735711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:57.876 [2024-11-04 13:55:44.743645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:57.876 [2024-11-04 13:55:44.744437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:21:57.876 [2024-11-04 13:55:44.764640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.876 13:55:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:58.443 [2024-11-04 13:55:45.125847] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:21:58.443 [2024-11-04 13:55:45.126388] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:21:58.443 [2024-11-04 13:55:45.126411] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:21:58.443 [2024-11-04 13:55:45.126422] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:21:58.443 [2024-11-04 13:55:45.133683] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:58.443 [2024-11-04 13:55:45.133723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:58.443 [2024-11-04 13:55:45.141640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:58.443 [2024-11-04 13:55:45.142491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:21:58.443 [2024-11-04 13:55:45.145929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:21:58.443 { 00:21:58.443 "ublk_device": "/dev/ublkb0", 00:21:58.443 "id": 0, 00:21:58.443 "queue_depth": 512, 00:21:58.443 "num_queues": 4, 00:21:58.443 "bdev_name": "Malloc0" 00:21:58.443 }, 00:21:58.443 { 00:21:58.443 "ublk_device": "/dev/ublkb1", 00:21:58.443 "id": 1, 00:21:58.443 "queue_depth": 512, 00:21:58.443 "num_queues": 4, 00:21:58.443 "bdev_name": "Malloc1" 00:21:58.443 }, 00:21:58.443 { 00:21:58.443 "ublk_device": "/dev/ublkb2", 00:21:58.443 "id": 2, 00:21:58.443 "queue_depth": 512, 00:21:58.443 "num_queues": 4, 00:21:58.443 "bdev_name": "Malloc2" 00:21:58.443 }, 00:21:58.443 { 00:21:58.443 "ublk_device": "/dev/ublkb3", 00:21:58.443 "id": 3, 00:21:58.443 "queue_depth": 512, 00:21:58.443 "num_queues": 4, 00:21:58.443 "bdev_name": "Malloc3" 00:21:58.443 } 00:21:58.443 ]' 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:58.443 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:21:58.703 13:55:45 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:21:58.703 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:21:58.962 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:21:59.220 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:21:59.220 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:21:59.220 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:59.220 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:21:59.220 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:59.220 13:55:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:59.220 [2024-11-04 13:55:46.051873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:21:59.220 [2024-11-04 13:55:46.086689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:59.220 [2024-11-04 13:55:46.087793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:59.220 [2024-11-04 13:55:46.093611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:59.220 [2024-11-04 13:55:46.094047] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:59.220 [2024-11-04 13:55:46.094071] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.220 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:59.220 [2024-11-04 13:55:46.100793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:59.479 [2024-11-04 13:55:46.146674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:59.479 [2024-11-04 13:55:46.147728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:59.479 [2024-11-04 13:55:46.156694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:59.479 [2024-11-04 13:55:46.157099] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:59.479 [2024-11-04 13:55:46.157124] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 [2024-11-04 13:55:46.171826] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:21:59.479 [2024-11-04 13:55:46.210667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:59.479 [2024-11-04 13:55:46.211833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:21:59.479 [2024-11-04 13:55:46.218653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:59.479 [2024-11-04 13:55:46.219000] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:21:59.479 [2024-11-04 13:55:46.219022] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 [2024-11-04 
13:55:46.234754] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:21:59.479 [2024-11-04 13:55:46.268166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:59.479 [2024-11-04 13:55:46.269222] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:21:59.479 [2024-11-04 13:55:46.277656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:59.479 [2024-11-04 13:55:46.278030] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:21:59.479 [2024-11-04 13:55:46.278053] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.479 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:21:59.738 [2024-11-04 13:55:46.571717] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:59.738 [2024-11-04 13:55:46.579964] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:59.738 [2024-11-04 13:55:46.580031] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:59.738 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:21:59.738 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:59.738 13:55:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:59.738 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.738 13:55:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 13:55:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.677 13:55:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:00.677 13:55:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:00.677 13:55:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.677 13:55:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:00.936 13:55:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.936 13:55:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:00.936 13:55:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:00.936 13:55:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.936 13:55:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:01.504 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.504 13:55:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:01.504 13:55:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:01.504 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.504 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:01.761 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:02.020 ************************************ 00:22:02.020 END TEST test_create_multi_ublk 00:22:02.020 ************************************ 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:02.020 00:22:02.020 real 0m5.145s 00:22:02.020 user 0m1.103s 00:22:02.020 sys 0m0.228s 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.020 13:55:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.020 13:55:48 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:02.020 13:55:48 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:02.020 13:55:48 ublk -- ublk/ublk.sh@130 -- # killprocess 73529 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@952 -- # '[' -z 73529 ']' 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@956 -- # kill -0 73529 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@957 -- # uname 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73529 00:22:02.020 killing process with pid 73529 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73529' 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@971 -- # kill 73529 00:22:02.020 13:55:48 ublk -- common/autotest_common.sh@976 -- # wait 73529 00:22:03.395 [2024-11-04 13:55:50.233104] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:03.395 [2024-11-04 13:55:50.233212] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:05.295 00:22:05.295 real 0m33.834s 00:22:05.295 user 0m48.417s 00:22:05.296 sys 0m10.759s 00:22:05.296 13:55:51 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:05.296 13:55:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 ************************************ 00:22:05.296 END TEST ublk 00:22:05.296 ************************************ 00:22:05.296 13:55:51 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:05.296 13:55:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:05.296 
13:55:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:05.296 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 ************************************ 00:22:05.296 START TEST ublk_recovery 00:22:05.296 ************************************ 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:05.296 * Looking for test storage... 00:22:05.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.296 13:55:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.296 --rc genhtml_branch_coverage=1 00:22:05.296 --rc genhtml_function_coverage=1 00:22:05.296 --rc genhtml_legend=1 00:22:05.296 --rc geninfo_all_blocks=1 00:22:05.296 --rc geninfo_unexecuted_blocks=1 00:22:05.296 00:22:05.296 ' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.296 --rc genhtml_branch_coverage=1 00:22:05.296 --rc genhtml_function_coverage=1 00:22:05.296 --rc genhtml_legend=1 00:22:05.296 --rc geninfo_all_blocks=1 00:22:05.296 --rc geninfo_unexecuted_blocks=1 00:22:05.296 00:22:05.296 ' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.296 --rc genhtml_branch_coverage=1 00:22:05.296 --rc genhtml_function_coverage=1 00:22:05.296 --rc genhtml_legend=1 00:22:05.296 --rc geninfo_all_blocks=1 00:22:05.296 --rc geninfo_unexecuted_blocks=1 00:22:05.296 00:22:05.296 ' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.296 --rc genhtml_branch_coverage=1 00:22:05.296 --rc genhtml_function_coverage=1 00:22:05.296 --rc genhtml_legend=1 00:22:05.296 --rc geninfo_all_blocks=1 00:22:05.296 --rc geninfo_unexecuted_blocks=1 00:22:05.296 00:22:05.296 ' 00:22:05.296 13:55:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:05.296 13:55:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:05.296 13:55:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:05.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.296 13:55:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73972 00:22:05.296 13:55:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:05.296 13:55:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.296 13:55:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73972 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73972 ']' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.296 13:55:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 [2024-11-04 13:55:52.084071] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:22:05.296 [2024-11-04 13:55:52.084283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73972 ] 00:22:05.555 [2024-11-04 13:55:52.290129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:05.555 [2024-11-04 13:55:52.428251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.555 [2024-11-04 13:55:52.428287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.491 13:55:53 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:06.491 13:55:53 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:22:06.491 13:55:53 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:06.491 13:55:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.491 13:55:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.750 [2024-11-04 13:55:53.415606] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:06.750 [2024-11-04 13:55:53.418541] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.750 13:55:53 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.750 malloc0 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.750 13:55:53 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.750 [2024-11-04 13:55:53.589838] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:06.750 [2024-11-04 13:55:53.589979] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:06.750 [2024-11-04 13:55:53.589996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:06.750 [2024-11-04 13:55:53.590008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:06.750 [2024-11-04 13:55:53.598737] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:06.750 [2024-11-04 13:55:53.598771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:06.750 [2024-11-04 13:55:53.605624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:06.750 [2024-11-04 13:55:53.605808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:06.750 [2024-11-04 13:55:53.622629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:06.750 1 00:22:06.750 13:55:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.750 13:55:53 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:08.127 13:55:54 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74007 00:22:08.127 13:55:54 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:08.127 13:55:54 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:08.127 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:08.127 fio-3.35 00:22:08.127 Starting 1 process 00:22:13.399 13:55:59 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73972 00:22:13.399 13:55:59 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:18.674 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73972 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:18.674 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74118 00:22:18.674 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:18.674 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.674 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74118 00:22:18.674 13:56:04 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 74118 ']' 00:22:18.674 13:56:04 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.674 13:56:04 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:18.674 13:56:04 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.674 13:56:04 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:18.674 13:56:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.674 [2024-11-04 13:56:04.755672] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
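At this point the old target (pid 73972) has been SIGKILLed while fio still holds /dev/ublkb1 open, and a fresh spdk_tgt (pid 74118) is coming up; the trace that follows re-adopts the live kernel device instead of recreating it. Condensed into a minimal sketch (all three RPCs appear verbatim in the trace below; rpc_cmd is the test suite's thin wrapper around scripts/rpc.py, and error handling is omitted):

  rpc_cmd ublk_create_target                      # rebuild the ublk target inside the new process
  rpc_cmd bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev under the same name
  rpc_cmd ublk_recover_disk malloc0 1             # re-attach live /dev/ublkb1 via UBLK_CMD_START_USER_RECOVERY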
00:22:18.674 [2024-11-04 13:56:04.756018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74118 ] 00:22:18.675 [2024-11-04 13:56:04.929103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:18.675 [2024-11-04 13:56:05.064573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.675 [2024-11-04 13:56:05.064606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:22:19.241 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.241 [2024-11-04 13:56:06.093616] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:19.241 [2024-11-04 13:56:06.096603] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.241 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.241 13:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.500 malloc0 00:22:19.500 13:56:06 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.500 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:19.500 13:56:06 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.500 13:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.500 [2024-11-04 13:56:06.273815] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:19.500 [2024-11-04 13:56:06.273873] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:19.500 [2024-11-04 13:56:06.273888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:19.500 [2024-11-04 13:56:06.281672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:19.500 [2024-11-04 13:56:06.281710] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:22:19.500 [2024-11-04 13:56:06.281723] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:19.500 [2024-11-04 13:56:06.281837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:19.500 1 00:22:19.500 13:56:06 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.500 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74007 00:22:19.500 [2024-11-04 13:56:06.289616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:19.500 [2024-11-04 13:56:06.297222] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:19.500 [2024-11-04 13:56:06.304875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:19.500 [2024-11-04 
13:56:06.304918] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:23:15.750
00:23:15.750 fio_test: (groupid=0, jobs=1): err= 0: pid=74015: Mon Nov 4 13:56:54 2024
00:23:15.750 read: IOPS=17.1k, BW=66.8MiB/s (70.0MB/s)(4007MiB/60002msec)
00:23:15.750 slat (usec): min=2, max=959, avg= 7.39, stdev= 3.01
00:23:15.750 clat (usec): min=1047, max=6675.2k, avg=3643.29, stdev=50158.57
00:23:15.750 lat (usec): min=1054, max=6675.3k, avg=3650.68, stdev=50158.56
00:23:15.750 clat percentiles (usec):
00:23:15.750 | 1.00th=[ 2409], 5.00th=[ 2671], 10.00th=[ 2769], 20.00th=[ 2868],
00:23:15.750 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097],
00:23:15.750 | 70.00th=[ 3228], 80.00th=[ 3458], 90.00th=[ 3884], 95.00th=[ 4752],
00:23:15.750 | 99.00th=[ 6718], 99.50th=[ 8094], 99.90th=[11207], 99.95th=[13042],
00:23:15.750 | 99.99th=[13960]
00:23:15.750 bw ( KiB/s): min= 3960, max=91944, per=100.00%, avg=76016.01, stdev=11307.04, samples=107
00:23:15.750 iops : min= 990, max=22986, avg=19003.98, stdev=2826.75, samples=107
00:23:15.750 write: IOPS=17.1k, BW=66.7MiB/s (69.9MB/s)(4002MiB/60002msec); 0 zone resets
00:23:15.750 slat (usec): min=2, max=941, avg= 7.54, stdev= 3.11
00:23:15.750 clat (usec): min=1037, max=6675.4k, avg=3831.92, stdev=55141.91
00:23:15.750 lat (usec): min=1044, max=6675.4k, avg=3839.46, stdev=55141.90
00:23:15.750 clat percentiles (usec):
00:23:15.750 | 1.00th=[ 2442], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 2999],
00:23:15.750 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3163], 60.00th=[ 3228],
00:23:15.750 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 3982], 95.00th=[ 4752],
00:23:15.750 | 99.00th=[ 6849], 99.50th=[ 8160], 99.90th=[11076], 99.95th=[13042],
00:23:15.750 | 99.99th=[14091]
00:23:15.750 bw ( KiB/s): min= 4264, max=90392, per=100.00%, avg=75902.79, stdev=11211.85, samples=107
00:23:15.750 iops : min= 1066, max=22598, avg=18975.67, stdev=2802.96, samples=107
00:23:15.750 lat (msec) : 2=0.07%, 4=90.58%, 10=9.18%, 20=0.17%, >=2000=0.01%
00:23:15.750 cpu : usr=10.30%, sys=25.18%, ctx=71726, majf=0, minf=14
00:23:15.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:23:15.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:15.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:15.750 issued rwts: total=1025857,1024418,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:15.750 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:15.750
00:23:15.750 Run status group 0 (all jobs):
00:23:15.750 READ: bw=66.8MiB/s (70.0MB/s), 66.8MiB/s-66.8MiB/s (70.0MB/s-70.0MB/s), io=4007MiB (4202MB), run=60002-60002msec
00:23:15.750 WRITE: bw=66.7MiB/s (69.9MB/s), 66.7MiB/s-66.7MiB/s (69.9MB/s-69.9MB/s), io=4002MiB (4196MB), run=60002-60002msec
00:23:15.750
00:23:15.750 Disk stats (read/write):
00:23:15.750 ublkb1: ios=1023660/1022226, merge=0/0, ticks=3632866/3695069, in_queue=7327936, util=99.94%
13:56:54 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
13:56:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
13:56:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:23:15.750 [2024-11-04 13:56:54.910765] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:23:15.750 [2024-11-04 13:56:54.949752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:23:15.750 [2024-11-04 13:56:54.950218] ublk.c:
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:15.750 [2024-11-04 13:56:54.958663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:15.750 [2024-11-04 13:56:54.958952] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:15.750 [2024-11-04 13:56:54.959121] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.750 13:56:54 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.750 [2024-11-04 13:56:54.972761] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:15.750 [2024-11-04 13:56:54.981629] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:15.750 [2024-11-04 13:56:54.981702] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.750 13:56:54 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:15.750 13:56:54 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:15.750 13:56:54 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74118 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 74118 ']' 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 74118 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:15.750 13:56:54 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74118 00:23:15.750 killing process with pid 74118 00:23:15.750 13:56:55 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:15.750 13:56:55 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:15.750 13:56:55 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74118' 00:23:15.750 13:56:55 ublk_recovery -- common/autotest_common.sh@971 -- # kill 74118 00:23:15.750 13:56:55 ublk_recovery -- common/autotest_common.sh@976 -- # wait 74118 00:23:15.750 [2024-11-04 13:56:56.896941] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:15.750 [2024-11-04 13:56:56.897267] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:15.750 ************************************ 00:23:15.750 END TEST ublk_recovery 00:23:15.750 ************************************ 00:23:15.750 00:23:15.750 real 1m6.867s 00:23:15.750 user 1m49.333s 00:23:15.750 sys 0m34.727s 00:23:15.750 13:56:58 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.750 13:56:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.750 13:56:58 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@256 -- # timing_exit lib 00:23:15.750 13:56:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.750 13:56:58 -- common/autotest_common.sh@10 -- # set +x 00:23:15.750 13:56:58 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@311 -- # 
'[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:23:15.750 13:56:58 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:15.750 13:56:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:15.750 13:56:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.750 13:56:58 -- common/autotest_common.sh@10 -- # set +x 00:23:15.750 ************************************ 00:23:15.750 START TEST ftl 00:23:15.750 ************************************ 00:23:15.750 13:56:58 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:15.750 * Looking for test storage... 00:23:15.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.750 13:56:58 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.750 13:56:58 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.750 13:56:58 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.750 13:56:58 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.750 13:56:58 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.750 13:56:58 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.751 13:56:58 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.751 13:56:58 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.751 13:56:58 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.751 13:56:58 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.751 13:56:58 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.751 13:56:58 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.751 13:56:58 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.751 13:56:58 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:15.751 13:56:58 ftl -- scripts/common.sh@345 -- # : 1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.751 13:56:58 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.751 13:56:58 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@353 -- # local d=1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.751 13:56:58 ftl -- scripts/common.sh@355 -- # echo 1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.751 13:56:58 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:15.751 13:56:58 ftl -- scripts/common.sh@353 -- # local d=2 00:23:15.751 13:56:58 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.751 13:56:58 ftl -- scripts/common.sh@355 -- # echo 2 00:23:15.751 13:56:58 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.751 13:56:58 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.751 13:56:58 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.751 13:56:58 ftl -- scripts/common.sh@368 -- # return 0 00:23:15.751 13:56:58 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.751 13:56:58 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.751 --rc genhtml_branch_coverage=1 00:23:15.751 --rc genhtml_function_coverage=1 00:23:15.751 --rc genhtml_legend=1 00:23:15.751 --rc geninfo_all_blocks=1 00:23:15.751 --rc geninfo_unexecuted_blocks=1 00:23:15.751 00:23:15.751 ' 00:23:15.751 13:56:58 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.751 --rc genhtml_branch_coverage=1 00:23:15.751 --rc genhtml_function_coverage=1 00:23:15.751 --rc genhtml_legend=1 00:23:15.751 --rc geninfo_all_blocks=1 00:23:15.751 --rc geninfo_unexecuted_blocks=1 00:23:15.751 00:23:15.751 ' 00:23:15.751 13:56:58 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.751 --rc genhtml_branch_coverage=1 00:23:15.751 --rc genhtml_function_coverage=1 00:23:15.751 --rc genhtml_legend=1 00:23:15.751 --rc geninfo_all_blocks=1 00:23:15.751 --rc geninfo_unexecuted_blocks=1 00:23:15.751 00:23:15.751 ' 00:23:15.751 13:56:58 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.751 --rc genhtml_branch_coverage=1 00:23:15.751 --rc genhtml_function_coverage=1 00:23:15.751 --rc genhtml_legend=1 00:23:15.751 --rc geninfo_all_blocks=1 00:23:15.751 --rc geninfo_unexecuted_blocks=1 00:23:15.751 00:23:15.751 ' 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:15.751 13:56:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:15.751 13:56:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.751 13:56:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.751 13:56:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
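The lt/cmp_versions probe traced above ("lt 1.15 2") decides whether the older lcov 1.x flag spelling is required. Reconstructed as a sketch (the field splitting and per-field loop follow the trace; the exact function body in scripts/common.sh may differ):

  lt() {   # returns 0 iff version $1 sorts strictly before $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"    # 1.15 -> (1 15)
      IFS=.-: read -ra ver2 <<< "$2"    # 2    -> (2)
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'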
00:23:15.751 13:56:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:15.751 13:56:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.751 13:56:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:15.751 13:56:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:15.751 13:56:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.751 13:56:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.751 13:56:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:15.751 13:56:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:15.751 13:56:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.751 13:56:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.751 13:56:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:15.751 13:56:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:15.751 13:56:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.751 13:56:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.751 13:56:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:15.751 13:56:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:15.751 13:56:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.751 13:56:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.751 13:56:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.751 13:56:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.751 13:56:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:15.751 13:56:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:15.751 13:56:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.751 13:56:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:15.751 13:56:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:15.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:15.751 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:15.751 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:15.751 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:15.751 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:15.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
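The "Waiting for process to start up..." message above is printed by waitforlisten (common/autotest_common.sh), which polls the new target's RPC socket before any rpc_cmd is issued; its trace, including the rpc_addr and max_retries=100 defaults, follows below. A minimal stand-in with the same observable behavior might look like this (rpc_get_methods is a standard SPDK RPC; the poll interval here is illustrative, not taken from the suite):

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do                  # mirrors max_retries=100
          kill -0 "$pid" 2>/dev/null || return 1         # target process died
          "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }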
00:23:15.751 13:56:59 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74920 00:23:15.751 13:56:59 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:15.751 13:56:59 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74920 00:23:15.751 13:56:59 ftl -- common/autotest_common.sh@833 -- # '[' -z 74920 ']' 00:23:15.751 13:56:59 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.751 13:56:59 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.751 13:56:59 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.751 13:56:59 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.751 13:56:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:15.751 [2024-11-04 13:56:59.725689] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:23:15.751 [2024-11-04 13:56:59.726246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74920 ] 00:23:15.751 [2024-11-04 13:56:59.941054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.751 [2024-11-04 13:57:00.194894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.751 13:57:00 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:15.751 13:57:00 ftl -- common/autotest_common.sh@866 -- # return 0 00:23:15.751 13:57:00 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:15.751 13:57:01 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:15.751 13:57:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:15.751 13:57:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:16.318 13:57:02 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:16.318 13:57:02 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:16.318 13:57:02 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@50 -- # break 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:16.576 13:57:03 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:16.835 13:57:03 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:16.835 13:57:03 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:16.835 13:57:03 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:16.835 13:57:03 ftl -- ftl/ftl.sh@63 -- # break 00:23:16.835 13:57:03 ftl -- ftl/ftl.sh@66 -- # killprocess 74920 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@952 -- # '[' -z 74920 ']' 00:23:16.835 13:57:03 
ftl -- common/autotest_common.sh@956 -- # kill -0 74920 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@957 -- # uname 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74920 00:23:16.835 killing process with pid 74920 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74920' 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@971 -- # kill 74920 00:23:16.835 13:57:03 ftl -- common/autotest_common.sh@976 -- # wait 74920 00:23:20.121 13:57:06 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:20.121 13:57:06 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:20.121 13:57:06 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:20.121 13:57:06 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:20.121 13:57:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:20.121 ************************************ 00:23:20.121 START TEST ftl_fio_basic 00:23:20.121 ************************************ 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:20.121 * Looking for test storage... 00:23:20.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:20.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.121 --rc genhtml_branch_coverage=1 00:23:20.121 --rc genhtml_function_coverage=1 00:23:20.121 --rc genhtml_legend=1 00:23:20.121 --rc geninfo_all_blocks=1 00:23:20.121 --rc geninfo_unexecuted_blocks=1 00:23:20.121 00:23:20.121 ' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:20.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.121 --rc genhtml_branch_coverage=1 00:23:20.121 --rc genhtml_function_coverage=1 00:23:20.121 --rc genhtml_legend=1 00:23:20.121 --rc geninfo_all_blocks=1 00:23:20.121 --rc geninfo_unexecuted_blocks=1 00:23:20.121 00:23:20.121 ' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:20.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.121 --rc genhtml_branch_coverage=1 00:23:20.121 --rc genhtml_function_coverage=1 00:23:20.121 --rc genhtml_legend=1 00:23:20.121 --rc geninfo_all_blocks=1 00:23:20.121 --rc geninfo_unexecuted_blocks=1 00:23:20.121 00:23:20.121 ' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:20.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.121 --rc genhtml_branch_coverage=1 00:23:20.121 --rc genhtml_function_coverage=1 00:23:20.121 --rc genhtml_legend=1 00:23:20.121 --rc geninfo_all_blocks=1 00:23:20.121 --rc geninfo_unexecuted_blocks=1 00:23:20.121 00:23:20.121 ' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
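As with the earlier suites, ftl/common.sh resolves every path from the script's own location before anything else runs; the dirname/readlink calls above and the rootdir/rpc_py assignments that follow reduce to this pattern (reconstructed from the trace):

  testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py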
00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:20.121 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75080 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75080 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 75080 ']' 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.122 13:57:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:20.122 [2024-11-04 13:57:06.914862] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
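fio.sh keys its workload list off the suite name passed as the third argument ('basic' in this run); the declare -A suite assignments traced above boil down to a plain associative-array lookup. A sketch (the 'basic' job list is verbatim from this log; the lookup and guard approximate the traced fio.sh@25 and fio.sh@34 steps):

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  tests=${suite[$3]}                 # invoked as: fio.sh 0000:00:11.0 0000:00:10.0 basic
  [ -z "$tests" ] && exit 1          # unknown suite name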
00:23:20.122 [2024-11-04 13:57:06.916056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75080 ] 00:23:20.380 [2024-11-04 13:57:07.113130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:20.380 [2024-11-04 13:57:07.257681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.380 [2024-11-04 13:57:07.257783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.380 [2024-11-04 13:57:07.257813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:21.754 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:23:22.012 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:22.270 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:22.270 { 00:23:22.270 "name": "nvme0n1", 00:23:22.270 "aliases": [ 00:23:22.270 "9e710fc7-6586-4c46-8dca-bbd248eceadc" 00:23:22.270 ], 00:23:22.270 "product_name": "NVMe disk", 00:23:22.270 "block_size": 4096, 00:23:22.270 "num_blocks": 1310720, 00:23:22.270 "uuid": "9e710fc7-6586-4c46-8dca-bbd248eceadc", 00:23:22.270 "numa_id": -1, 00:23:22.270 "assigned_rate_limits": { 00:23:22.270 "rw_ios_per_sec": 0, 00:23:22.270 "rw_mbytes_per_sec": 0, 00:23:22.270 "r_mbytes_per_sec": 0, 00:23:22.270 "w_mbytes_per_sec": 0 00:23:22.270 }, 00:23:22.270 "claimed": false, 00:23:22.270 "zoned": false, 00:23:22.270 "supported_io_types": { 00:23:22.270 "read": true, 00:23:22.270 "write": true, 00:23:22.270 "unmap": true, 00:23:22.270 "flush": true, 00:23:22.270 "reset": true, 00:23:22.270 "nvme_admin": true, 00:23:22.270 "nvme_io": true, 00:23:22.270 "nvme_io_md": false, 00:23:22.270 "write_zeroes": true, 00:23:22.270 "zcopy": false, 00:23:22.270 "get_zone_info": false, 00:23:22.270 "zone_management": false, 00:23:22.270 "zone_append": false, 00:23:22.270 "compare": true, 00:23:22.270 "compare_and_write": false, 00:23:22.270 "abort": true, 00:23:22.270 
"seek_hole": false, 00:23:22.270 "seek_data": false, 00:23:22.270 "copy": true, 00:23:22.270 "nvme_iov_md": false 00:23:22.270 }, 00:23:22.270 "driver_specific": { 00:23:22.270 "nvme": [ 00:23:22.270 { 00:23:22.270 "pci_address": "0000:00:11.0", 00:23:22.270 "trid": { 00:23:22.270 "trtype": "PCIe", 00:23:22.270 "traddr": "0000:00:11.0" 00:23:22.270 }, 00:23:22.270 "ctrlr_data": { 00:23:22.270 "cntlid": 0, 00:23:22.270 "vendor_id": "0x1b36", 00:23:22.270 "model_number": "QEMU NVMe Ctrl", 00:23:22.270 "serial_number": "12341", 00:23:22.270 "firmware_revision": "8.0.0", 00:23:22.270 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:22.270 "oacs": { 00:23:22.270 "security": 0, 00:23:22.270 "format": 1, 00:23:22.270 "firmware": 0, 00:23:22.270 "ns_manage": 1 00:23:22.270 }, 00:23:22.270 "multi_ctrlr": false, 00:23:22.270 "ana_reporting": false 00:23:22.270 }, 00:23:22.270 "vs": { 00:23:22.270 "nvme_version": "1.4" 00:23:22.270 }, 00:23:22.270 "ns_data": { 00:23:22.270 "id": 1, 00:23:22.270 "can_share": false 00:23:22.270 } 00:23:22.270 } 00:23:22.270 ], 00:23:22.270 "mp_policy": "active_passive" 00:23:22.270 } 00:23:22.270 } 00:23:22.270 ]' 00:23:22.270 13:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:22.270 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:22.529 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:22.529 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:22.787 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5514005a-dfb3-4dff-b73e-7b02ab119f23 00:23:22.787 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5514005a-dfb3-4dff-b73e-7b02ab119f23 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=0a6183b7-6edc-44d7-99d2-3afb94954ceb 
00:23:23.046 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:23:23.046 13:57:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.305 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:23.305 { 00:23:23.305 "name": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:23.305 "aliases": [ 00:23:23.305 "lvs/nvme0n1p0" 00:23:23.305 ], 00:23:23.305 "product_name": "Logical Volume", 00:23:23.305 "block_size": 4096, 00:23:23.305 "num_blocks": 26476544, 00:23:23.305 "uuid": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:23.305 "assigned_rate_limits": { 00:23:23.305 "rw_ios_per_sec": 0, 00:23:23.305 "rw_mbytes_per_sec": 0, 00:23:23.305 "r_mbytes_per_sec": 0, 00:23:23.305 "w_mbytes_per_sec": 0 00:23:23.305 }, 00:23:23.305 "claimed": false, 00:23:23.305 "zoned": false, 00:23:23.305 "supported_io_types": { 00:23:23.305 "read": true, 00:23:23.305 "write": true, 00:23:23.305 "unmap": true, 00:23:23.305 "flush": false, 00:23:23.305 "reset": true, 00:23:23.305 "nvme_admin": false, 00:23:23.305 "nvme_io": false, 00:23:23.305 "nvme_io_md": false, 00:23:23.305 "write_zeroes": true, 00:23:23.305 "zcopy": false, 00:23:23.305 "get_zone_info": false, 00:23:23.305 "zone_management": false, 00:23:23.305 "zone_append": false, 00:23:23.305 "compare": false, 00:23:23.305 "compare_and_write": false, 00:23:23.305 "abort": false, 00:23:23.305 "seek_hole": true, 00:23:23.305 "seek_data": true, 00:23:23.305 "copy": false, 00:23:23.305 "nvme_iov_md": false 00:23:23.305 }, 00:23:23.305 "driver_specific": { 00:23:23.305 "lvol": { 00:23:23.305 "lvol_store_uuid": "5514005a-dfb3-4dff-b73e-7b02ab119f23", 00:23:23.305 "base_bdev": "nvme0n1", 00:23:23.305 "thin_provision": true, 00:23:23.305 "num_allocated_clusters": 0, 00:23:23.305 "snapshot": false, 00:23:23.305 "clone": false, 00:23:23.305 "esnap_clone": false 00:23:23.305 } 00:23:23.305 } 00:23:23.305 } 00:23:23.305 ]' 00:23:23.305 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:23.564 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:23.565 13:57:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:23.823 13:57:10 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:23:23.823 13:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:24.390 { 00:23:24.390 "name": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:24.390 "aliases": [ 00:23:24.390 "lvs/nvme0n1p0" 00:23:24.390 ], 00:23:24.390 "product_name": "Logical Volume", 00:23:24.390 "block_size": 4096, 00:23:24.390 "num_blocks": 26476544, 00:23:24.390 "uuid": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:24.390 "assigned_rate_limits": { 00:23:24.390 "rw_ios_per_sec": 0, 00:23:24.390 "rw_mbytes_per_sec": 0, 00:23:24.390 "r_mbytes_per_sec": 0, 00:23:24.390 "w_mbytes_per_sec": 0 00:23:24.390 }, 00:23:24.390 "claimed": false, 00:23:24.390 "zoned": false, 00:23:24.390 "supported_io_types": { 00:23:24.390 "read": true, 00:23:24.390 "write": true, 00:23:24.390 "unmap": true, 00:23:24.390 "flush": false, 00:23:24.390 "reset": true, 00:23:24.390 "nvme_admin": false, 00:23:24.390 "nvme_io": false, 00:23:24.390 "nvme_io_md": false, 00:23:24.390 "write_zeroes": true, 00:23:24.390 "zcopy": false, 00:23:24.390 "get_zone_info": false, 00:23:24.390 "zone_management": false, 00:23:24.390 "zone_append": false, 00:23:24.390 "compare": false, 00:23:24.390 "compare_and_write": false, 00:23:24.390 "abort": false, 00:23:24.390 "seek_hole": true, 00:23:24.390 "seek_data": true, 00:23:24.390 "copy": false, 00:23:24.390 "nvme_iov_md": false 00:23:24.390 }, 00:23:24.390 "driver_specific": { 00:23:24.390 "lvol": { 00:23:24.390 "lvol_store_uuid": "5514005a-dfb3-4dff-b73e-7b02ab119f23", 00:23:24.390 "base_bdev": "nvme0n1", 00:23:24.390 "thin_provision": true, 00:23:24.390 "num_allocated_clusters": 0, 00:23:24.390 "snapshot": false, 00:23:24.390 "clone": false, 00:23:24.390 "esnap_clone": false 00:23:24.390 } 00:23:24.390 } 00:23:24.390 } 00:23:24.390 ]' 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:24.390 13:57:11 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:24.647 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:23:24.647 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a6183b7-6edc-44d7-99d2-3afb94954ceb 00:23:24.905 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:24.905 { 00:23:24.905 "name": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:24.905 "aliases": [ 00:23:24.905 "lvs/nvme0n1p0" 00:23:24.905 ], 00:23:24.905 "product_name": "Logical Volume", 00:23:24.905 "block_size": 4096, 00:23:24.905 "num_blocks": 26476544, 00:23:24.905 "uuid": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:24.905 "assigned_rate_limits": { 00:23:24.905 "rw_ios_per_sec": 0, 00:23:24.905 "rw_mbytes_per_sec": 0, 00:23:24.905 "r_mbytes_per_sec": 0, 00:23:24.905 "w_mbytes_per_sec": 0 00:23:24.905 }, 00:23:24.905 "claimed": false, 00:23:24.905 "zoned": false, 00:23:24.905 "supported_io_types": { 00:23:24.905 "read": true, 00:23:24.905 "write": true, 00:23:24.905 "unmap": true, 00:23:24.905 "flush": false, 00:23:24.905 "reset": true, 00:23:24.905 "nvme_admin": false, 00:23:24.905 "nvme_io": false, 00:23:24.905 "nvme_io_md": false, 00:23:24.905 "write_zeroes": true, 00:23:24.905 "zcopy": false, 00:23:24.905 "get_zone_info": false, 00:23:24.905 "zone_management": false, 00:23:24.905 "zone_append": false, 00:23:24.905 "compare": false, 00:23:24.905 "compare_and_write": false, 00:23:24.905 "abort": false, 00:23:24.905 "seek_hole": true, 00:23:24.905 "seek_data": true, 00:23:24.905 "copy": false, 00:23:24.905 "nvme_iov_md": false 00:23:24.905 }, 00:23:24.905 "driver_specific": { 00:23:24.905 "lvol": { 00:23:24.905 "lvol_store_uuid": "5514005a-dfb3-4dff-b73e-7b02ab119f23", 00:23:24.905 "base_bdev": "nvme0n1", 00:23:24.905 "thin_provision": true, 00:23:24.905 "num_allocated_clusters": 0, 00:23:24.905 "snapshot": false, 00:23:24.905 "clone": false, 00:23:24.905 "esnap_clone": false 00:23:24.905 } 00:23:24.905 } 00:23:24.905 } 00:23:24.905 ]' 00:23:24.905 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:25.185 13:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0a6183b7-6edc-44d7-99d2-3afb94954ceb -c nvc0n1p0 --l2p_dram_limit 60 00:23:25.444 [2024-11-04 13:57:12.129180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.444 [2024-11-04 13:57:12.129257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:25.445 [2024-11-04 13:57:12.129284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:25.445 
[2024-11-04 13:57:12.129302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.129414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.129438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.445 [2024-11-04 13:57:12.129459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:25.445 [2024-11-04 13:57:12.129474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.129553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:25.445 [2024-11-04 13:57:12.130926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:25.445 [2024-11-04 13:57:12.131167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.131195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.445 [2024-11-04 13:57:12.131221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:23:25.445 [2024-11-04 13:57:12.131239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.131452] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2193b2b3-5d34-46c8-976e-980c85dbe05b 00:23:25.445 [2024-11-04 13:57:12.133219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.133284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:25.445 [2024-11-04 13:57:12.133303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:25.445 [2024-11-04 13:57:12.133321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.141712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.142024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.445 [2024-11-04 13:57:12.142057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.260 ms 00:23:25.445 [2024-11-04 13:57:12.142076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.142304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.142331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.445 [2024-11-04 13:57:12.142347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:23:25.445 [2024-11-04 13:57:12.142371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.142496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.142522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:25.445 [2024-11-04 13:57:12.142547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:25.445 [2024-11-04 13:57:12.142596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.142648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:25.445 [2024-11-04 13:57:12.149845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 
13:57:12.149909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.445 [2024-11-04 13:57:12.149935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.198 ms 00:23:25.445 [2024-11-04 13:57:12.149954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.150016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.150032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:25.445 [2024-11-04 13:57:12.150053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:25.445 [2024-11-04 13:57:12.150072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.150141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:25.445 [2024-11-04 13:57:12.150356] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:25.445 [2024-11-04 13:57:12.150398] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:25.445 [2024-11-04 13:57:12.150419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:25.445 [2024-11-04 13:57:12.150441] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:25.445 [2024-11-04 13:57:12.150459] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:25.445 [2024-11-04 13:57:12.150480] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:25.445 [2024-11-04 13:57:12.150504] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:25.445 [2024-11-04 13:57:12.150527] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:25.445 [2024-11-04 13:57:12.150559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:25.445 [2024-11-04 13:57:12.150601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.150624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:25.445 [2024-11-04 13:57:12.150643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:23:25.445 [2024-11-04 13:57:12.150658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.150788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.445 [2024-11-04 13:57:12.150805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:25.445 [2024-11-04 13:57:12.150823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:25.445 [2024-11-04 13:57:12.150837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.445 [2024-11-04 13:57:12.150993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:25.445 [2024-11-04 13:57:12.151017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:25.445 [2024-11-04 13:57:12.151039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151073] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:25.445 [2024-11-04 13:57:12.151087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:25.445 [2024-11-04 13:57:12.151135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.445 [2024-11-04 13:57:12.151165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:25.445 [2024-11-04 13:57:12.151179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:25.445 [2024-11-04 13:57:12.151207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.445 [2024-11-04 13:57:12.151223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:25.445 [2024-11-04 13:57:12.151240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:25.445 [2024-11-04 13:57:12.151253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:25.445 [2024-11-04 13:57:12.151293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:25.445 [2024-11-04 13:57:12.151340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:25.445 [2024-11-04 13:57:12.151384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:25.445 [2024-11-04 13:57:12.151444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:25.445 [2024-11-04 13:57:12.151501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:25.445 [2024-11-04 13:57:12.151575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.445 [2024-11-04 13:57:12.151615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:25.445 [2024-11-04 13:57:12.151653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:25.445 [2024-11-04 13:57:12.151674] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.445 [2024-11-04 13:57:12.151692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:25.445 [2024-11-04 13:57:12.151713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:25.445 [2024-11-04 13:57:12.151731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:25.445 [2024-11-04 13:57:12.151778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:25.445 [2024-11-04 13:57:12.151798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151815] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:25.445 [2024-11-04 13:57:12.151837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:25.445 [2024-11-04 13:57:12.151856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.445 [2024-11-04 13:57:12.151877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.445 [2024-11-04 13:57:12.151896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:25.445 [2024-11-04 13:57:12.151923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:25.445 [2024-11-04 13:57:12.151941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:25.446 [2024-11-04 13:57:12.151963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:25.446 [2024-11-04 13:57:12.151985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:25.446 [2024-11-04 13:57:12.152007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:25.446 [2024-11-04 13:57:12.152031] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:25.446 [2024-11-04 13:57:12.152058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:25.446 [2024-11-04 13:57:12.152108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:25.446 [2024-11-04 13:57:12.152128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:25.446 [2024-11-04 13:57:12.152150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:25.446 [2024-11-04 13:57:12.152170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:25.446 [2024-11-04 13:57:12.152193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:25.446 [2024-11-04 13:57:12.152212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:25.446 [2024-11-04 13:57:12.152235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:25.446 [2024-11-04 13:57:12.152255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:25.446 [2024-11-04 13:57:12.152282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:25.446 [2024-11-04 13:57:12.152395] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:25.446 [2024-11-04 13:57:12.152419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:25.446 [2024-11-04 13:57:12.152482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:25.446 [2024-11-04 13:57:12.152502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:25.446 [2024-11-04 13:57:12.152525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:25.446 [2024-11-04 13:57:12.152545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.446 [2024-11-04 13:57:12.152606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:25.446 [2024-11-04 13:57:12.152628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:23:25.446 [2024-11-04 13:57:12.152650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.446 [2024-11-04 13:57:12.152798] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
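[Editor's sketch] The startup actions continue below, but at this point the whole five-layer stack is in place. Condensed from the RPC calls traced above (every command and UUID appears verbatim in this run's log; only the rpc shorthand variable and the comments are editorial):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> lvs 5514005a-dfb3-4dff-b73e-7b02ab119f23
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 5514005a-dfb3-4dff-b73e-7b02ab119f23
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0 write-buffer cache
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 0a6183b7-6edc-44d7-99d2-3afb94954ceb \
      -c nvc0n1p0 --l2p_dram_limit 60

The --l2p_dram_limit of 60 is consistent with the layout dump above: 20971520 L2P entries × 4 bytes per entry = 80 MiB of on-disk L2P ("Region l2p ... blocks: 80.00 MiB"), of which the log later reports at most 59 of the 60 MiB limit can stay resident in DRAM.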
00:23:25.446 [2024-11-04 13:57:12.152890] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:29.632 [2024-11-04 13:57:15.684990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.685284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:29.632 [2024-11-04 13:57:15.685424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3532.168 ms 00:23:29.632 [2024-11-04 13:57:15.685482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.732915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.733197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.632 [2024-11-04 13:57:15.733354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.958 ms 00:23:29.632 [2024-11-04 13:57:15.733475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.733722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.733801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:29.632 [2024-11-04 13:57:15.733954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:29.632 [2024-11-04 13:57:15.734014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.804427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.804767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.632 [2024-11-04 13:57:15.804919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.204 ms 00:23:29.632 [2024-11-04 13:57:15.804983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.805139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.805264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.632 [2024-11-04 13:57:15.805365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:29.632 [2024-11-04 13:57:15.805420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.806134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.806304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.632 [2024-11-04 13:57:15.806453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:23:29.632 [2024-11-04 13:57:15.806606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.806841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.806910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.632 [2024-11-04 13:57:15.807089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:23:29.632 [2024-11-04 13:57:15.807154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.834041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.834298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.632 [2024-11-04 
13:57:15.834417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.810 ms 00:23:29.632 [2024-11-04 13:57:15.834495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.632 [2024-11-04 13:57:15.851771] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:29.632 [2024-11-04 13:57:15.870846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.632 [2024-11-04 13:57:15.871142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:29.632 [2024-11-04 13:57:15.871249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.996 ms 00:23:29.633 [2024-11-04 13:57:15.871341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:15.953897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:15.954160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:29.633 [2024-11-04 13:57:15.954278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.441 ms 00:23:29.633 [2024-11-04 13:57:15.954325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:15.954713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:15.954745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:29.633 [2024-11-04 13:57:15.954770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:23:29.633 [2024-11-04 13:57:15.954784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.000949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.001042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:29.633 [2024-11-04 13:57:16.001073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.049 ms 00:23:29.633 [2024-11-04 13:57:16.001091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.046676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.046751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:29.633 [2024-11-04 13:57:16.046779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.480 ms 00:23:29.633 [2024-11-04 13:57:16.046796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.047786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.047967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:29.633 [2024-11-04 13:57:16.048001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:23:29.633 [2024-11-04 13:57:16.048014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.172806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.172894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:29.633 [2024-11-04 13:57:16.172929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.659 ms 00:23:29.633 [2024-11-04 13:57:16.172951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 
13:57:16.221501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.221618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:29.633 [2024-11-04 13:57:16.221645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.350 ms 00:23:29.633 [2024-11-04 13:57:16.221659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.269123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.269202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:29.633 [2024-11-04 13:57:16.269226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.361 ms 00:23:29.633 [2024-11-04 13:57:16.269239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.315901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.315993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:29.633 [2024-11-04 13:57:16.316019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.551 ms 00:23:29.633 [2024-11-04 13:57:16.316036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.316139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.316158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:29.633 [2024-11-04 13:57:16.316183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:29.633 [2024-11-04 13:57:16.316204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.316400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.633 [2024-11-04 13:57:16.316425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:29.633 [2024-11-04 13:57:16.316446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:29.633 [2024-11-04 13:57:16.316462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.633 [2024-11-04 13:57:16.318099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4188.181 ms, result 0 00:23:29.633 { 00:23:29.633 "name": "ftl0", 00:23:29.633 "uuid": "2193b2b3-5d34-46c8-976e-980c85dbe05b" 00:23:29.633 } 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:29.633 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:29.890 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:30.148 [ 00:23:30.148 { 00:23:30.148 "name": "ftl0", 00:23:30.148 "aliases": [ 00:23:30.148 "2193b2b3-5d34-46c8-976e-980c85dbe05b" 00:23:30.148 ], 00:23:30.148 "product_name": "FTL 
disk", 00:23:30.148 "block_size": 4096, 00:23:30.148 "num_blocks": 20971520, 00:23:30.148 "uuid": "2193b2b3-5d34-46c8-976e-980c85dbe05b", 00:23:30.148 "assigned_rate_limits": { 00:23:30.148 "rw_ios_per_sec": 0, 00:23:30.148 "rw_mbytes_per_sec": 0, 00:23:30.148 "r_mbytes_per_sec": 0, 00:23:30.148 "w_mbytes_per_sec": 0 00:23:30.148 }, 00:23:30.148 "claimed": false, 00:23:30.148 "zoned": false, 00:23:30.148 "supported_io_types": { 00:23:30.148 "read": true, 00:23:30.148 "write": true, 00:23:30.148 "unmap": true, 00:23:30.148 "flush": true, 00:23:30.148 "reset": false, 00:23:30.148 "nvme_admin": false, 00:23:30.148 "nvme_io": false, 00:23:30.148 "nvme_io_md": false, 00:23:30.148 "write_zeroes": true, 00:23:30.148 "zcopy": false, 00:23:30.148 "get_zone_info": false, 00:23:30.148 "zone_management": false, 00:23:30.148 "zone_append": false, 00:23:30.148 "compare": false, 00:23:30.148 "compare_and_write": false, 00:23:30.148 "abort": false, 00:23:30.148 "seek_hole": false, 00:23:30.148 "seek_data": false, 00:23:30.148 "copy": false, 00:23:30.148 "nvme_iov_md": false 00:23:30.148 }, 00:23:30.148 "driver_specific": { 00:23:30.148 "ftl": { 00:23:30.148 "base_bdev": "0a6183b7-6edc-44d7-99d2-3afb94954ceb", 00:23:30.148 "cache": "nvc0n1p0" 00:23:30.148 } 00:23:30.148 } 00:23:30.148 } 00:23:30.148 ] 00:23:30.148 13:57:16 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:23:30.148 13:57:16 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:30.148 13:57:16 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:30.411 13:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:30.411 13:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:30.670 [2024-11-04 13:57:17.511501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.670 [2024-11-04 13:57:17.511626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.670 [2024-11-04 13:57:17.511653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:30.670 [2024-11-04 13:57:17.511673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.670 [2024-11-04 13:57:17.511729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:30.670 [2024-11-04 13:57:17.516929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.670 [2024-11-04 13:57:17.516979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.670 [2024-11-04 13:57:17.517000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.160 ms 00:23:30.670 [2024-11-04 13:57:17.517014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.670 [2024-11-04 13:57:17.517557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.670 [2024-11-04 13:57:17.517594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.670 [2024-11-04 13:57:17.517613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:23:30.670 [2024-11-04 13:57:17.517625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.670 [2024-11-04 13:57:17.520827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.670 [2024-11-04 13:57:17.521013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.670 
[2024-11-04 13:57:17.521044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:23:30.670 [2024-11-04 13:57:17.521057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.670 [2024-11-04 13:57:17.527427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.670 [2024-11-04 13:57:17.527504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.670 [2024-11-04 13:57:17.527540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:23:30.670 [2024-11-04 13:57:17.527554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.670 [2024-11-04 13:57:17.574485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.670 [2024-11-04 13:57:17.574551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.670 [2024-11-04 13:57:17.574585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.729 ms 00:23:30.670 [2024-11-04 13:57:17.574599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.930 [2024-11-04 13:57:17.603517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.930 [2024-11-04 13:57:17.603601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.930 [2024-11-04 13:57:17.603626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.779 ms 00:23:30.930 [2024-11-04 13:57:17.603648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.930 [2024-11-04 13:57:17.603953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.930 [2024-11-04 13:57:17.603977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.930 [2024-11-04 13:57:17.603995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:23:30.930 [2024-11-04 13:57:17.604007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.930 [2024-11-04 13:57:17.651793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.930 [2024-11-04 13:57:17.651875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:30.930 [2024-11-04 13:57:17.651900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.740 ms 00:23:30.930 [2024-11-04 13:57:17.651913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.930 [2024-11-04 13:57:17.698313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.930 [2024-11-04 13:57:17.698594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:30.930 [2024-11-04 13:57:17.698632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.302 ms 00:23:30.930 [2024-11-04 13:57:17.698649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.930 [2024-11-04 13:57:17.745233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.930 [2024-11-04 13:57:17.745527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.930 [2024-11-04 13:57:17.745580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.474 ms 00:23:30.930 [2024-11-04 13:57:17.745595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.930 [2024-11-04 13:57:17.792108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.930 [2024-11-04 13:57:17.792423] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.930 [2024-11-04 13:57:17.792462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.277 ms 00:23:30.931 [2024-11-04 13:57:17.792476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.931 [2024-11-04 13:57:17.792603] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.931 [2024-11-04 13:57:17.792636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.792991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 
[2024-11-04 13:57:17.793024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.931 [2024-11-04 13:57:17.793409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free
00:23:30.931 [2024-11-04 13:57:17.793428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 48-100: 0 / 261120 wr_cnt: 0 state: free
00:23:30.932 [2024-11-04 13:57:17.794301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:30.932 [2024-11-04 13:57:17.794316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2193b2b3-5d34-46c8-976e-980c85dbe05b
00:23:30.932 [2024-11-04 13:57:17.794330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:23:30.932 [2024-11-04 13:57:17.794350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:23:30.932 [2024-11-04 13:57:17.794362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:30.932 [2024-11-04 13:57:17.794384] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:30.932 [2024-11-04 13:57:17.794396] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:30.932 [2024-11-04 13:57:17.794413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:30.932 [2024-11-04 13:57:17.794425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:30.932 [2024-11-04 13:57:17.794440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:30.932 [2024-11-04 13:57:17.794451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
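For reference, the WAF figure in the dump above is the ratio of total media writes to user writes, so a freshly created device that has seen only internal metadata writes (here: 960 total, 0 user) reports inf. A minimal, hypothetical helper (not part of the SPDK scripts) reproducing the arithmetic:

    # waf TOTAL_WRITES USER_WRITES -> prints the write amplification factor.
    # A zero user-write count yields "inf", matching the dump above.
    waf() {
      local total=$1 user=$2
      if (( user == 0 )); then
        echo inf
      else
        # bash has no float arithmetic, so delegate the division to awk
        awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
      fi
    }
    waf 960 0   # -> inf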
00:23:30.932 [2024-11-04 13:57:17.794468] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.869 ms, status: 0)
00:23:30.932 [2024-11-04 13:57:17.819461] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 24.807 ms, status: 0)
00:23:30.932 [2024-11-04 13:57:17.820449] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.573 ms, status: 0)
00:23:31.191 [2024-11-04 13:57:17.905166] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:23:31.191 [2024-11-04 13:57:17.905380] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:23:31.191 [2024-11-04 13:57:17.905626] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:23:31.191 [2024-11-04 13:57:17.905721] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:23:31.191 [2024-11-04 13:57:18.066255] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.193694] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.193971] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.194124] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.194342] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.194477] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.194611] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.194750] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:23:31.449 [2024-11-04 13:57:18.194986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 683.454 ms, result 0
00:23:31.449 true
00:23:31.449 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75080
00:23:31.449 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 75080 ']'
00:23:31.449 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 75080
00:23:31.449 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75080
00:23:31.450 killing process with pid 75080
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75080'
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 75080
00:23:31.450 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 75080
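The killprocess helper traced above reduces to a guarded kill-and-reap. A simplified sketch of that flow (illustrative only, not the verbatim autotest_common.sh source):

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # no pid given
      kill -0 "$pid" 2>/dev/null || return 0       # process already gone
      if [ "$(uname)" = Linux ]; then
        # Make sure we are signalling the SPDK reactor, not a sudo wrapper.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap; returns its exit status
    }

The trailing wait matters: it blocks until the FTL shutdown sequence above has fully completed before the next fio stage starts.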
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib=
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:23:38.019 13:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:23:38.019 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:23:38.019 fio-3.35
00:23:38.019 Starting 1 thread 00:23:43.299 00:23:43.299
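The LD_PRELOAD step traced above is the core of fio_bdev: the stock fio binary is not ASAN-instrumented, so the sanitizer runtime that the SPDK ioengine was linked against has to be loaded ahead of the plugin itself. A distilled sketch of that detect-and-launch logic (paths taken from this run):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

    # The third column of the matching ldd line is the resolved library path;
    # asan_lib stays empty when the plugin was built without ASAN.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Sanitizer runtime first, then the SPDK ioengine, then run the job.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"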
: 2=0.04% 00:23:43.299 cpu : usr=98.81%, sys=0.24%, ctx=9, majf=0, minf=1169 00:23:43.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.299 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:43.299 00:23:43.299 Run status group 0 (all jobs): 00:23:43.299 READ: bw=68.6MiB/s (71.9MB/s), 68.6MiB/s-68.6MiB/s (71.9MB/s-71.9MB/s), io=255MiB (267MB), run=3710-3710msec 00:23:43.299 WRITE: bw=69.1MiB/s (72.5MB/s), 69.1MiB/s-69.1MiB/s (72.5MB/s-72.5MB/s), io=256MiB (269MB), run=3706-3706msec 00:23:45.243 ----------------------------------------------------- 00:23:45.243 Suppressions used: 00:23:45.243 count bytes template 00:23:45.243 1 5 /usr/src/fio/parse.c 00:23:45.243 1 8 libtcmalloc_minimal.so 00:23:45.243 1 904 libcrypto.so 00:23:45.243 ----------------------------------------------------- 00:23:45.243 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:45.243 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:45.244 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:45.244 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:45.244 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:23:45.244 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:45.244 13:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:45.244 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:45.244 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:45.244 fio-3.35 00:23:45.244 Starting 2 threads 00:24:17.309 00:24:17.309 first_half: (groupid=0, jobs=1): err= 0: pid=75430: Mon Nov 4 13:58:03 2024 00:24:17.309 read: IOPS=2186, BW=8746KiB/s (8956kB/s)(256MiB/29943msec) 00:24:17.309 slat (usec): min=4, max=408, avg= 8.07, stdev= 4.30 00:24:17.309 clat (usec): min=939, max=369690, avg=48382.85, stdev=33997.03 00:24:17.309 lat (usec): min=945, max=369700, avg=48390.92, stdev=33997.42 00:24:17.309 clat percentiles (msec): 00:24:17.309 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 36], 00:24:17.309 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 42], 60.00th=[ 43], 00:24:17.309 | 70.00th=[ 45], 80.00th=[ 49], 90.00th=[ 61], 95.00th=[ 101], 00:24:17.309 | 99.00th=[ 222], 99.50th=[ 247], 99.90th=[ 305], 99.95th=[ 326], 00:24:17.309 | 99.99th=[ 359] 00:24:17.309 write: IOPS=2191, BW=8768KiB/s (8978kB/s)(256MiB/29899msec); 0 zone resets 00:24:17.309 slat (usec): min=4, max=2474, avg= 9.54, stdev=17.54 00:24:17.309 clat (usec): min=408, max=200175, avg=10104.84, stdev=10989.68 00:24:17.309 lat (usec): min=420, max=200182, avg=10114.37, stdev=10989.68 00:24:17.309 clat percentiles (usec): 00:24:17.309 | 1.00th=[ 1139], 5.00th=[ 1582], 10.00th=[ 1942], 20.00th=[ 3752], 00:24:17.309 | 30.00th=[ 5276], 40.00th=[ 6652], 50.00th=[ 7635], 60.00th=[ 8717], 00:24:17.309 | 70.00th=[ 9896], 80.00th=[ 12780], 90.00th=[ 17957], 95.00th=[ 35914], 00:24:17.309 | 99.00th=[ 49546], 99.50th=[ 51643], 99.90th=[ 72877], 99.95th=[196084], 00:24:17.309 | 99.99th=[198181] 00:24:17.309 bw ( KiB/s): min= 296, max=39568, per=100.00%, avg=19285.63, stdev=11764.98, samples=27 00:24:17.309 iops : min= 74, max= 9892, avg=4821.41, stdev=2941.24, samples=27 00:24:17.309 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.17% 00:24:17.309 lat (msec) : 2=5.13%, 4=5.40%, 10=25.12%, 20=11.96%, 50=43.61% 00:24:17.309 lat (msec) : 100=6.02%, 250=2.32%, 500=0.23% 00:24:17.309 cpu : usr=98.13%, sys=0.44%, ctx=74, majf=0, minf=5534 00:24:17.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:17.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.309 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.309 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.309 second_half: (groupid=0, jobs=1): err= 0: pid=75431: Mon Nov 4 13:58:03 2024 00:24:17.309 read: IOPS=2206, BW=8828KiB/s (9040kB/s)(256MiB/29674msec) 00:24:17.309 slat (nsec): min=4008, max=49113, avg=7628.11, stdev=2412.02 00:24:17.309 clat (msec): min=11, max=300, avg=49.48, stdev=31.60 00:24:17.309 lat (msec): min=11, max=300, avg=49.48, stdev=31.60 00:24:17.309 clat percentiles (msec): 00:24:17.309 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:24:17.309 | 30.00th=[ 38], 40.00th=[ 40], 50.00th=[ 42], 60.00th=[ 43], 00:24:17.309 | 70.00th=[ 45], 80.00th=[ 50], 90.00th=[ 63], 95.00th=[ 101], 00:24:17.309 | 99.00th=[ 224], 
99.50th=[ 247], 99.90th=[ 275], 99.95th=[ 288], 00:24:17.309 | 99.99th=[ 296] 00:24:17.309 write: IOPS=2219, BW=8878KiB/s (9091kB/s)(256MiB/29527msec); 0 zone resets 00:24:17.309 slat (usec): min=5, max=3414, avg= 9.13, stdev=18.95 00:24:17.309 clat (usec): min=481, max=56614, avg=8489.98, stdev=5853.74 00:24:17.309 lat (usec): min=496, max=56623, avg=8499.11, stdev=5854.30 00:24:17.309 clat percentiles (usec): 00:24:17.309 | 1.00th=[ 1270], 5.00th=[ 2180], 10.00th=[ 3163], 20.00th=[ 4424], 00:24:17.309 | 30.00th=[ 5538], 40.00th=[ 6521], 50.00th=[ 7242], 60.00th=[ 8160], 00:24:17.309 | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[15795], 95.00th=[18482], 00:24:17.309 | 99.00th=[33817], 99.50th=[42206], 99.90th=[51119], 99.95th=[52691], 00:24:17.309 | 99.99th=[55837] 00:24:17.309 bw ( KiB/s): min= 1672, max=45880, per=100.00%, avg=20148.77, stdev=13668.67, samples=26 00:24:17.309 iops : min= 418, max=11470, avg=5037.19, stdev=3417.17, samples=26 00:24:17.309 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.13% 00:24:17.309 lat (msec) : 2=1.85%, 4=6.13%, 10=29.79%, 20=10.63%, 50=42.04% 00:24:17.309 lat (msec) : 100=6.86%, 250=2.32%, 500=0.20% 00:24:17.309 cpu : usr=99.02%, sys=0.22%, ctx=40, majf=0, minf=5579 00:24:17.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:17.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.309 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.309 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.309 00:24:17.309 Run status group 0 (all jobs): 00:24:17.309 READ: bw=17.1MiB/s (17.9MB/s), 8746KiB/s-8828KiB/s (8956kB/s-9040kB/s), io=512MiB (536MB), run=29674-29943msec 00:24:17.309 WRITE: bw=17.1MiB/s (18.0MB/s), 8768KiB/s-8878KiB/s (8978kB/s-9091kB/s), io=512MiB (537MB), run=29527-29899msec 00:24:19.872 ----------------------------------------------------- 00:24:19.872 Suppressions used: 00:24:19.872 count bytes template 00:24:19.872 2 10 /usr/src/fio/parse.c 00:24:19.872 2 192 /usr/src/fio/iolog.c 00:24:19.872 1 8 libtcmalloc_minimal.so 00:24:19.872 1 904 libcrypto.so 00:24:19.872 ----------------------------------------------------- 00:24:19.872 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local sanitizers 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:19.872 13:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:20.130 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:20.130 fio-3.35 00:24:20.130 Starting 1 thread 00:24:38.267 00:24:38.267 test: (groupid=0, jobs=1): err= 0: pid=75818: Mon Nov 4 13:58:24 2024 00:24:38.267 read: IOPS=6403, BW=25.0MiB/s (26.2MB/s)(255MiB/10183msec) 00:24:38.267 slat (nsec): min=3606, max=88215, avg=6513.31, stdev=2136.39 00:24:38.267 clat (usec): min=889, max=36122, avg=19978.41, stdev=2852.45 00:24:38.267 lat (usec): min=907, max=36129, avg=19984.92, stdev=2852.74 00:24:38.267 clat percentiles (usec): 00:24:38.267 | 1.00th=[16712], 5.00th=[16909], 10.00th=[17171], 20.00th=[17433], 00:24:38.267 | 30.00th=[17695], 40.00th=[18482], 50.00th=[19268], 60.00th=[20317], 00:24:38.267 | 70.00th=[21103], 80.00th=[21890], 90.00th=[23725], 95.00th=[25297], 00:24:38.267 | 99.00th=[28967], 99.50th=[31065], 99.90th=[33817], 99.95th=[35390], 00:24:38.267 | 99.99th=[35914] 00:24:38.267 write: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(256MiB/5978msec); 0 zone resets 00:24:38.267 slat (usec): min=4, max=673, avg= 9.14, stdev= 5.96 00:24:38.267 clat (usec): min=632, max=65448, avg=11619.53, stdev=13997.23 00:24:38.267 lat (usec): min=639, max=65457, avg=11628.67, stdev=13997.23 00:24:38.267 clat percentiles (usec): 00:24:38.267 | 1.00th=[ 955], 5.00th=[ 1156], 10.00th=[ 1287], 20.00th=[ 1483], 00:24:38.267 | 30.00th=[ 1713], 40.00th=[ 2245], 50.00th=[ 8225], 60.00th=[ 9372], 00:24:38.267 | 70.00th=[10814], 80.00th=[13566], 90.00th=[40633], 95.00th=[43779], 00:24:38.267 | 99.00th=[49021], 99.50th=[53216], 99.90th=[62129], 99.95th=[63177], 00:24:38.267 | 99.99th=[63701] 00:24:38.267 bw ( KiB/s): min=36160, max=58968, per=99.63%, avg=43690.67, stdev=6825.12, samples=12 00:24:38.267 iops : min= 9040, max=14742, avg=10922.67, stdev=1706.28, samples=12 00:24:38.267 lat (usec) : 750=0.03%, 1000=0.75% 00:24:38.267 lat (msec) : 2=18.07%, 4=2.13%, 10=11.38%, 20=37.55%, 50=29.68% 00:24:38.267 lat (msec) : 100=0.41% 00:24:38.267 cpu : usr=98.73%, sys=0.35%, ctx=32, majf=0, minf=5565 00:24:38.267 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:38.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.267 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:38.267 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:38.267 00:24:38.267 Run status group 0 (all jobs): 00:24:38.267 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=255MiB (267MB), run=10183-10183msec 00:24:38.267 WRITE: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=256MiB (268MB), run=5978-5978msec 00:24:40.796 ----------------------------------------------------- 00:24:40.796 Suppressions used: 00:24:40.796 count bytes template 00:24:40.796 1 5 /usr/src/fio/parse.c 00:24:40.796 2 192 /usr/src/fio/iolog.c 00:24:40.796 1 8 libtcmalloc_minimal.so 00:24:40.796 1 904 libcrypto.so 00:24:40.796 ----------------------------------------------------- 00:24:40.796 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:40.796 Remove shared memory files 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58407 /dev/shm/spdk_tgt_trace.pid73972 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:40.796 ************************************ 00:24:40.796 END TEST ftl_fio_basic 00:24:40.796 ************************************ 00:24:40.796 00:24:40.796 real 1m20.743s 00:24:40.796 user 2m55.269s 00:24:40.796 sys 0m4.768s 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:40.796 13:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:40.796 13:58:27 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:40.796 13:58:27 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:40.796 13:58:27 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:40.796 13:58:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:40.796 ************************************ 00:24:40.796 START TEST ftl_bdevperf 00:24:40.796 ************************************ 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:40.796 * Looking for test storage... 
00:24:40.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.796 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:40.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.797 --rc genhtml_branch_coverage=1 00:24:40.797 --rc genhtml_function_coverage=1 00:24:40.797 --rc genhtml_legend=1 00:24:40.797 --rc geninfo_all_blocks=1 00:24:40.797 --rc geninfo_unexecuted_blocks=1 00:24:40.797 00:24:40.797 ' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:40.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.797 --rc genhtml_branch_coverage=1 00:24:40.797 
--rc genhtml_function_coverage=1 00:24:40.797 --rc genhtml_legend=1 00:24:40.797 --rc geninfo_all_blocks=1 00:24:40.797 --rc geninfo_unexecuted_blocks=1 00:24:40.797 00:24:40.797 ' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:40.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.797 --rc genhtml_branch_coverage=1 00:24:40.797 --rc genhtml_function_coverage=1 00:24:40.797 --rc genhtml_legend=1 00:24:40.797 --rc geninfo_all_blocks=1 00:24:40.797 --rc geninfo_unexecuted_blocks=1 00:24:40.797 00:24:40.797 ' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:40.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.797 --rc genhtml_branch_coverage=1 00:24:40.797 --rc genhtml_function_coverage=1 00:24:40.797 --rc genhtml_legend=1 00:24:40.797 --rc geninfo_all_blocks=1 00:24:40.797 --rc geninfo_unexecuted_blocks=1 00:24:40.797 00:24:40.797 ' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76080 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76080 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 76080 ']' 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:40.797 13:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:40.797 [2024-11-04 13:58:27.716787] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
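The traces that follow are verbose, so it may help to see the bdev stack this prologue assembles in one place. A condensed approximation of the rpc.py sequence below (names and sizes are taken from this run; the actual flow goes through create_base_bdev and create_nv_cache_bdev in test/ftl/common.sh, and the lvol is addressed by UUID rather than by its lvs/nvme0n1p0 alias):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: the QEMU NVMe at 00:11.0 becomes nvme0n1 (1310720 x 4 KiB blocks).
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # Carve a 103424 MiB thin-provisioned lvol to serve as the FTL base device.
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -l lvs        # alias: lvs/nvme0n1p0

    # Cache device: the NVMe at 00:10.0, split into a 5171 MiB NV-cache slice.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1                 # -> nvc0n1p0

    # Glue both into the FTL bdev under test, capping the L2P table at 20 MiB of DRAM.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d lvs/nvme0n1p0 -c nvc0n1p0 --l2p_dram_limit 20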
00:24:40.797 [2024-11-04 13:58:27.717143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76080 ] 00:24:41.055 [2024-11-04 13:58:27.923756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.388 [2024-11-04 13:58:28.053696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:41.956 13:58:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:24:42.521 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:42.779 { 00:24:42.779 "name": "nvme0n1", 00:24:42.779 "aliases": [ 00:24:42.779 "d637ca58-6190-4447-aa49-179a38887c98" 00:24:42.779 ], 00:24:42.779 "product_name": "NVMe disk", 00:24:42.779 "block_size": 4096, 00:24:42.779 "num_blocks": 1310720, 00:24:42.779 "uuid": "d637ca58-6190-4447-aa49-179a38887c98", 00:24:42.779 "numa_id": -1, 00:24:42.779 "assigned_rate_limits": { 00:24:42.779 "rw_ios_per_sec": 0, 00:24:42.779 "rw_mbytes_per_sec": 0, 00:24:42.779 "r_mbytes_per_sec": 0, 00:24:42.779 "w_mbytes_per_sec": 0 00:24:42.779 }, 00:24:42.779 "claimed": true, 00:24:42.779 "claim_type": "read_many_write_one", 00:24:42.779 "zoned": false, 00:24:42.779 "supported_io_types": { 00:24:42.779 "read": true, 00:24:42.779 "write": true, 00:24:42.779 "unmap": true, 00:24:42.779 "flush": true, 00:24:42.779 "reset": true, 00:24:42.779 "nvme_admin": true, 00:24:42.779 "nvme_io": true, 00:24:42.779 "nvme_io_md": false, 00:24:42.779 "write_zeroes": true, 00:24:42.779 "zcopy": false, 00:24:42.779 "get_zone_info": false, 00:24:42.779 "zone_management": false, 00:24:42.779 "zone_append": false, 00:24:42.779 "compare": true, 00:24:42.779 "compare_and_write": false, 00:24:42.779 "abort": true, 00:24:42.779 "seek_hole": false, 00:24:42.779 "seek_data": false, 00:24:42.779 "copy": true, 00:24:42.779 "nvme_iov_md": false 00:24:42.779 }, 00:24:42.779 "driver_specific": { 00:24:42.779 
"nvme": [ 00:24:42.779 { 00:24:42.779 "pci_address": "0000:00:11.0", 00:24:42.779 "trid": { 00:24:42.779 "trtype": "PCIe", 00:24:42.779 "traddr": "0000:00:11.0" 00:24:42.779 }, 00:24:42.779 "ctrlr_data": { 00:24:42.779 "cntlid": 0, 00:24:42.779 "vendor_id": "0x1b36", 00:24:42.779 "model_number": "QEMU NVMe Ctrl", 00:24:42.779 "serial_number": "12341", 00:24:42.779 "firmware_revision": "8.0.0", 00:24:42.779 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:42.779 "oacs": { 00:24:42.779 "security": 0, 00:24:42.779 "format": 1, 00:24:42.779 "firmware": 0, 00:24:42.779 "ns_manage": 1 00:24:42.779 }, 00:24:42.779 "multi_ctrlr": false, 00:24:42.779 "ana_reporting": false 00:24:42.779 }, 00:24:42.779 "vs": { 00:24:42.779 "nvme_version": "1.4" 00:24:42.779 }, 00:24:42.779 "ns_data": { 00:24:42.779 "id": 1, 00:24:42.779 "can_share": false 00:24:42.779 } 00:24:42.779 } 00:24:42.779 ], 00:24:42.779 "mp_policy": "active_passive" 00:24:42.779 } 00:24:42.779 } 00:24:42.779 ]' 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:42.779 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:43.037 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5514005a-dfb3-4dff-b73e-7b02ab119f23 00:24:43.037 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:43.037 13:58:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5514005a-dfb3-4dff-b73e-7b02ab119f23 00:24:43.295 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:43.553 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=bc65a3a1-6839-4642-b420-817427150e8a 00:24:43.553 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bc65a3a1-6839-4642-b420-817427150e8a 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:43.811 13:58:30 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:24:43.811 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:44.070 { 00:24:44.070 "name": "8e43406a-521d-4ec4-9956-9d3db4f0f0e9", 00:24:44.070 "aliases": [ 00:24:44.070 "lvs/nvme0n1p0" 00:24:44.070 ], 00:24:44.070 "product_name": "Logical Volume", 00:24:44.070 "block_size": 4096, 00:24:44.070 "num_blocks": 26476544, 00:24:44.070 "uuid": "8e43406a-521d-4ec4-9956-9d3db4f0f0e9", 00:24:44.070 "assigned_rate_limits": { 00:24:44.070 "rw_ios_per_sec": 0, 00:24:44.070 "rw_mbytes_per_sec": 0, 00:24:44.070 "r_mbytes_per_sec": 0, 00:24:44.070 "w_mbytes_per_sec": 0 00:24:44.070 }, 00:24:44.070 "claimed": false, 00:24:44.070 "zoned": false, 00:24:44.070 "supported_io_types": { 00:24:44.070 "read": true, 00:24:44.070 "write": true, 00:24:44.070 "unmap": true, 00:24:44.070 "flush": false, 00:24:44.070 "reset": true, 00:24:44.070 "nvme_admin": false, 00:24:44.070 "nvme_io": false, 00:24:44.070 "nvme_io_md": false, 00:24:44.070 "write_zeroes": true, 00:24:44.070 "zcopy": false, 00:24:44.070 "get_zone_info": false, 00:24:44.070 "zone_management": false, 00:24:44.070 "zone_append": false, 00:24:44.070 "compare": false, 00:24:44.070 "compare_and_write": false, 00:24:44.070 "abort": false, 00:24:44.070 "seek_hole": true, 00:24:44.070 "seek_data": true, 00:24:44.070 "copy": false, 00:24:44.070 "nvme_iov_md": false 00:24:44.070 }, 00:24:44.070 "driver_specific": { 00:24:44.070 "lvol": { 00:24:44.070 "lvol_store_uuid": "bc65a3a1-6839-4642-b420-817427150e8a", 00:24:44.070 "base_bdev": "nvme0n1", 00:24:44.070 "thin_provision": true, 00:24:44.070 "num_allocated_clusters": 0, 00:24:44.070 "snapshot": false, 00:24:44.070 "clone": false, 00:24:44.070 "esnap_clone": false 00:24:44.070 } 00:24:44.070 } 00:24:44.070 } 00:24:44.070 ]' 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:44.070 13:58:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:24:44.636 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:44.894 { 00:24:44.894 "name": "8e43406a-521d-4ec4-9956-9d3db4f0f0e9", 00:24:44.894 "aliases": [ 00:24:44.894 "lvs/nvme0n1p0" 00:24:44.894 ], 00:24:44.894 "product_name": "Logical Volume", 00:24:44.894 "block_size": 4096, 00:24:44.894 "num_blocks": 26476544, 00:24:44.894 "uuid": "8e43406a-521d-4ec4-9956-9d3db4f0f0e9", 00:24:44.894 "assigned_rate_limits": { 00:24:44.894 "rw_ios_per_sec": 0, 00:24:44.894 "rw_mbytes_per_sec": 0, 00:24:44.894 "r_mbytes_per_sec": 0, 00:24:44.894 "w_mbytes_per_sec": 0 00:24:44.894 }, 00:24:44.894 "claimed": false, 00:24:44.894 "zoned": false, 00:24:44.894 "supported_io_types": { 00:24:44.894 "read": true, 00:24:44.894 "write": true, 00:24:44.894 "unmap": true, 00:24:44.894 "flush": false, 00:24:44.894 "reset": true, 00:24:44.894 "nvme_admin": false, 00:24:44.894 "nvme_io": false, 00:24:44.894 "nvme_io_md": false, 00:24:44.894 "write_zeroes": true, 00:24:44.894 "zcopy": false, 00:24:44.894 "get_zone_info": false, 00:24:44.894 "zone_management": false, 00:24:44.894 "zone_append": false, 00:24:44.894 "compare": false, 00:24:44.894 "compare_and_write": false, 00:24:44.894 "abort": false, 00:24:44.894 "seek_hole": true, 00:24:44.894 "seek_data": true, 00:24:44.894 "copy": false, 00:24:44.894 "nvme_iov_md": false 00:24:44.894 }, 00:24:44.894 "driver_specific": { 00:24:44.894 "lvol": { 00:24:44.894 "lvol_store_uuid": "bc65a3a1-6839-4642-b420-817427150e8a", 00:24:44.894 "base_bdev": "nvme0n1", 00:24:44.894 "thin_provision": true, 00:24:44.894 "num_allocated_clusters": 0, 00:24:44.894 "snapshot": false, 00:24:44.894 "clone": false, 00:24:44.894 "esnap_clone": false 00:24:44.894 } 00:24:44.894 } 00:24:44.894 } 00:24:44.894 ]' 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:44.894 13:58:31 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:24:45.153 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:45.719 { 00:24:45.719 "name": "8e43406a-521d-4ec4-9956-9d3db4f0f0e9", 00:24:45.719 "aliases": [ 00:24:45.719 "lvs/nvme0n1p0" 00:24:45.719 ], 00:24:45.719 "product_name": "Logical Volume", 00:24:45.719 "block_size": 4096, 00:24:45.719 "num_blocks": 26476544, 00:24:45.719 "uuid": "8e43406a-521d-4ec4-9956-9d3db4f0f0e9", 00:24:45.719 "assigned_rate_limits": { 00:24:45.719 "rw_ios_per_sec": 0, 00:24:45.719 "rw_mbytes_per_sec": 0, 00:24:45.719 "r_mbytes_per_sec": 0, 00:24:45.719 "w_mbytes_per_sec": 0 00:24:45.719 }, 00:24:45.719 "claimed": false, 00:24:45.719 "zoned": false, 00:24:45.719 "supported_io_types": { 00:24:45.719 "read": true, 00:24:45.719 "write": true, 00:24:45.719 "unmap": true, 00:24:45.719 "flush": false, 00:24:45.719 "reset": true, 00:24:45.719 "nvme_admin": false, 00:24:45.719 "nvme_io": false, 00:24:45.719 "nvme_io_md": false, 00:24:45.719 "write_zeroes": true, 00:24:45.719 "zcopy": false, 00:24:45.719 "get_zone_info": false, 00:24:45.719 "zone_management": false, 00:24:45.719 "zone_append": false, 00:24:45.719 "compare": false, 00:24:45.719 "compare_and_write": false, 00:24:45.719 "abort": false, 00:24:45.719 "seek_hole": true, 00:24:45.719 "seek_data": true, 00:24:45.719 "copy": false, 00:24:45.719 "nvme_iov_md": false 00:24:45.719 }, 00:24:45.719 "driver_specific": { 00:24:45.719 "lvol": { 00:24:45.719 "lvol_store_uuid": "bc65a3a1-6839-4642-b420-817427150e8a", 00:24:45.719 "base_bdev": "nvme0n1", 00:24:45.719 "thin_provision": true, 00:24:45.719 "num_allocated_clusters": 0, 00:24:45.719 "snapshot": false, 00:24:45.719 "clone": false, 00:24:45.719 "esnap_clone": false 00:24:45.719 } 00:24:45.719 } 00:24:45.719 } 00:24:45.719 ]' 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:45.719 13:58:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8e43406a-521d-4ec4-9956-9d3db4f0f0e9 -c nvc0n1p0 --l2p_dram_limit 20 00:24:45.979 [2024-11-04 13:58:32.761357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.761431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:45.979 [2024-11-04 13:58:32.761468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:45.979 [2024-11-04 13:58:32.761484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.761555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.761577] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.979 [2024-11-04 13:58:32.761609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:45.979 [2024-11-04 13:58:32.761636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.761661] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:45.979 [2024-11-04 13:58:32.762949] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:45.979 [2024-11-04 13:58:32.762980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.762996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.979 [2024-11-04 13:58:32.763010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:24:45.979 [2024-11-04 13:58:32.763025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.763116] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 85c72dfc-1213-40da-8fe2-4267a37a7098 00:24:45.979 [2024-11-04 13:58:32.764648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.764834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:45.979 [2024-11-04 13:58:32.764867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:45.979 [2024-11-04 13:58:32.764884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.772543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.772740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.979 [2024-11-04 13:58:32.772772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.587 ms 00:24:45.979 [2024-11-04 13:58:32.772785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.772924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.772940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.979 [2024-11-04 13:58:32.772962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:45.979 [2024-11-04 13:58:32.772975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.773055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.773069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:45.979 [2024-11-04 13:58:32.773085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:45.979 [2024-11-04 13:58:32.773097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.773127] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:45.979 [2024-11-04 13:58:32.779014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.779053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.979 [2024-11-04 13:58:32.779067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.897 ms 00:24:45.979 [2024-11-04 13:58:32.779101] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.779150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.779165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:45.979 [2024-11-04 13:58:32.779177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:45.979 [2024-11-04 13:58:32.779190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.779237] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:45.979 [2024-11-04 13:58:32.779384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:45.979 [2024-11-04 13:58:32.779400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:45.979 [2024-11-04 13:58:32.779419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:45.979 [2024-11-04 13:58:32.779433] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:45.979 [2024-11-04 13:58:32.779449] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:45.979 [2024-11-04 13:58:32.779462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:45.979 [2024-11-04 13:58:32.779476] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:45.979 [2024-11-04 13:58:32.779487] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:45.979 [2024-11-04 13:58:32.779501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:45.979 [2024-11-04 13:58:32.779513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.779530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:45.979 [2024-11-04 13:58:32.779541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:45.979 [2024-11-04 13:58:32.779555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.779651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.979 [2024-11-04 13:58:32.779669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:45.979 [2024-11-04 13:58:32.779681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:45.979 [2024-11-04 13:58:32.779697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.979 [2024-11-04 13:58:32.779806] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:45.979 [2024-11-04 13:58:32.779823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:45.979 [2024-11-04 13:58:32.779840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:45.979 [2024-11-04 13:58:32.779854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:45.979 [2024-11-04 13:58:32.779867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:45.979 [2024-11-04 13:58:32.779881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:45.979 [2024-11-04 13:58:32.779892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:45.979 
[2024-11-04 13:58:32.779906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:45.979 [2024-11-04 13:58:32.779918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:45.979 [2024-11-04 13:58:32.779932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:45.979 [2024-11-04 13:58:32.779944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:45.979 [2024-11-04 13:58:32.779957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:45.979 [2024-11-04 13:58:32.779968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:45.979 [2024-11-04 13:58:32.779995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:45.979 [2024-11-04 13:58:32.780006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:45.979 [2024-11-04 13:58:32.780023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:45.979 [2024-11-04 13:58:32.780036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:45.979 [2024-11-04 13:58:32.780050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:45.979 [2024-11-04 13:58:32.780061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:45.979 [2024-11-04 13:58:32.780077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:45.979 [2024-11-04 13:58:32.780089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:45.979 [2024-11-04 13:58:32.780102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:45.979 [2024-11-04 13:58:32.780113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:45.979 [2024-11-04 13:58:32.780128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:45.980 [2024-11-04 13:58:32.780152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:45.980 [2024-11-04 13:58:32.780163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:45.980 [2024-11-04 13:58:32.780188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:45.980 [2024-11-04 13:58:32.780202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:45.980 [2024-11-04 13:58:32.780228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:45.980 [2024-11-04 13:58:32.780240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:45.980 [2024-11-04 13:58:32.780264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:45.980 [2024-11-04 13:58:32.780278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:45.980 [2024-11-04 13:58:32.780289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:45.980 [2024-11-04 13:58:32.780302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:45.980 [2024-11-04 13:58:32.780313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:45.980 [2024-11-04 13:58:32.780327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:45.980 [2024-11-04 13:58:32.780352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:45.980 [2024-11-04 13:58:32.780363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780376] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:45.980 [2024-11-04 13:58:32.780388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:45.980 [2024-11-04 13:58:32.780402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:45.980 [2024-11-04 13:58:32.780414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:45.980 [2024-11-04 13:58:32.780434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:45.980 [2024-11-04 13:58:32.780446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:45.980 [2024-11-04 13:58:32.780460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:45.980 [2024-11-04 13:58:32.780472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:45.980 [2024-11-04 13:58:32.780485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:45.980 [2024-11-04 13:58:32.780497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:45.980 [2024-11-04 13:58:32.780515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:45.980 [2024-11-04 13:58:32.780530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:45.980 [2024-11-04 13:58:32.780559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:45.980 [2024-11-04 13:58:32.780574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:45.980 [2024-11-04 13:58:32.780598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:45.980 [2024-11-04 13:58:32.780614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:45.980 [2024-11-04 13:58:32.780627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:45.980 [2024-11-04 13:58:32.780641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:45.980 [2024-11-04 13:58:32.780653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:45.980 [2024-11-04 13:58:32.780672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:45.980 [2024-11-04 13:58:32.780684] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:45.980 [2024-11-04 13:58:32.780753] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:45.980 [2024-11-04 13:58:32.780766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:45.980 [2024-11-04 13:58:32.780797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:45.980 [2024-11-04 13:58:32.780811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:45.980 [2024-11-04 13:58:32.780836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:45.980 [2024-11-04 13:58:32.780852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.980 [2024-11-04 13:58:32.780868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:45.980 [2024-11-04 13:58:32.780883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:24:45.980 [2024-11-04 13:58:32.780895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.980 [2024-11-04 13:58:32.780942] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
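The hex region sizes in the superblock dump above line up with the MiB figures printed earlier in the layout dump. A minimal sketch of that arithmetic, with every constant copied from the log lines above (illustrative only, not harness output):

    # Illustrative sketch (not part of the harness): cross-checking the FTL
    # layout dump above. All constants are copied from the log.
    l2p_entries = 20971520            # "L2P entries: 20971520"
    entry_size = 4                    # "L2P address size: 4" (bytes per entry)
    print(l2p_entries * entry_size / (1 << 20))   # 80.0 -> the full L2P table is 80 MiB
    # The device was created with --l2p_dram_limit 20, so at most a 20 MiB
    # window of that 80 MiB table can be resident in DRAM at once; the rest
    # is presumably demand-paged, consistent with the "l2p maximum resident
    # size is: 19 (of 20) MiB" notice further down.
    l2p_region_blocks = 0x5000        # "Region type:0x2 ... blk_sz:0x5000" (4 KiB blocks)
    print(l2p_region_blocks * 4096 / (1 << 20))   # 80.0 -> matches "Region l2p ... 80.00 MiB"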
00:24:45.980 [2024-11-04 13:58:32.780957] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:49.266 [2024-11-04 13:58:35.474177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.474438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:49.266 [2024-11-04 13:58:35.474477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2693.209 ms 00:24:49.266 [2024-11-04 13:58:35.474491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.520015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.520079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.266 [2024-11-04 13:58:35.520102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.144 ms 00:24:49.266 [2024-11-04 13:58:35.520114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.520294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.520310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.266 [2024-11-04 13:58:35.520345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:49.266 [2024-11-04 13:58:35.520369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.586330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.586610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.266 [2024-11-04 13:58:35.586662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.888 ms 00:24:49.266 [2024-11-04 13:58:35.586675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.586738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.586754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.266 [2024-11-04 13:58:35.586770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.266 [2024-11-04 13:58:35.586782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.587337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.587353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.266 [2024-11-04 13:58:35.587368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:24:49.266 [2024-11-04 13:58:35.587380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.587502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.587517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.266 [2024-11-04 13:58:35.587535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:49.266 [2024-11-04 13:58:35.587546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.608903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.609139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.266 [2024-11-04 
13:58:35.609172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.332 ms 00:24:49.266 [2024-11-04 13:58:35.609202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.623711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:49.266 [2024-11-04 13:58:35.630025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.630252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.266 [2024-11-04 13:58:35.630282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.685 ms 00:24:49.266 [2024-11-04 13:58:35.630297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.707469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.707866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:49.266 [2024-11-04 13:58:35.707916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.108 ms 00:24:49.266 [2024-11-04 13:58:35.707946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.708245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.708279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.266 [2024-11-04 13:58:35.708297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:24:49.266 [2024-11-04 13:58:35.708315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.751394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.751732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:49.266 [2024-11-04 13:58:35.751780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.980 ms 00:24:49.266 [2024-11-04 13:58:35.751796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.793447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.793771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:49.266 [2024-11-04 13:58:35.793802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.586 ms 00:24:49.266 [2024-11-04 13:58:35.793818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.794696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.794728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.266 [2024-11-04 13:58:35.794741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:24:49.266 [2024-11-04 13:58:35.794755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.906423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.906525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:49.266 [2024-11-04 13:58:35.906545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.540 ms 00:24:49.266 [2024-11-04 13:58:35.906560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 
13:58:35.950282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.950661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:49.266 [2024-11-04 13:58:35.950709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.546 ms 00:24:49.266 [2024-11-04 13:58:35.950755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:35.994717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:35.994792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:49.266 [2024-11-04 13:58:35.994822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.861 ms 00:24:49.266 [2024-11-04 13:58:35.994847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:36.038257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:36.038545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:49.266 [2024-11-04 13:58:36.038586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.338 ms 00:24:49.266 [2024-11-04 13:58:36.038602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:36.038679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:36.038698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.266 [2024-11-04 13:58:36.038711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:49.266 [2024-11-04 13:58:36.038724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:36.038862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.266 [2024-11-04 13:58:36.038878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:49.266 [2024-11-04 13:58:36.038889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:49.266 [2024-11-04 13:58:36.038903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.266 [2024-11-04 13:58:36.040105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3278.210 ms, result 0 00:24:49.266 { 00:24:49.266 "name": "ftl0", 00:24:49.266 "uuid": "85c72dfc-1213-40da-8fe2-4267a37a7098" 00:24:49.266 } 00:24:49.266 13:58:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:49.266 13:58:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:49.266 13:58:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:49.525 13:58:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:49.783 [2024-11-04 13:58:36.468328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:49.783 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:49.783 Zero copy mechanism will not be used. 00:24:49.783 Running I/O for 4 seconds... 
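The zero-copy notice above follows directly from the chosen I/O size: 69632 bytes is 17 blocks of 4 KiB (68 KiB), which exceeds the 65536-byte threshold. A small sketch of that check, plus a sanity check of the MiB/s column in the results that follow (the IOPS value is copied from the Total row below; illustrative only):

    io_size = 69632                    # -o 69632: 17 x 4096-byte blocks = 68 KiB
    print(io_size > 65536)             # True -> "Zero copy mechanism will not be used"
    iops = 2211.72                     # Total row of the table below
    print(iops * io_size / (1 << 20))  # ~146.87 -> matches the reported MiB/s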
00:24:51.672 2024.00 IOPS, 134.41 MiB/s [2024-11-04T13:58:39.528Z] 2097.50 IOPS, 139.29 MiB/s [2024-11-04T13:58:40.903Z] 2157.67 IOPS, 143.28 MiB/s [2024-11-04T13:58:40.903Z] 2212.50 IOPS, 146.92 MiB/s 00:24:53.981 Latency(us) 00:24:53.981 [2024-11-04T13:58:40.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.981 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:24:53.981 ftl0 : 4.00 2211.72 146.87 0.00 0.00 476.59 185.30 2262.55 00:24:53.981 [2024-11-04T13:58:40.903Z] =================================================================================================================== 00:24:53.981 [2024-11-04T13:58:40.903Z] Total : 2211.72 146.87 0.00 0.00 476.59 185.30 2262.55 00:24:53.981 [2024-11-04 13:58:40.480380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:53.981 { 00:24:53.981 "results": [ 00:24:53.981 { 00:24:53.981 "job": "ftl0", 00:24:53.981 "core_mask": "0x1", 00:24:53.981 "workload": "randwrite", 00:24:53.981 "status": "finished", 00:24:53.981 "queue_depth": 1, 00:24:53.981 "io_size": 69632, 00:24:53.981 "runtime": 4.001858, 00:24:53.981 "iops": 2211.722654826833, 00:24:53.981 "mibps": 146.87220754709438, 00:24:53.981 "io_failed": 0, 00:24:53.981 "io_timeout": 0, 00:24:53.981 "avg_latency_us": 476.5857783086119, 00:24:53.981 "min_latency_us": 185.2952380952381, 00:24:53.981 "max_latency_us": 2262.552380952381 00:24:53.981 } 00:24:53.981 ], 00:24:53.981 "core_count": 1 00:24:53.981 } 00:24:53.981 13:58:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:24:53.981 [2024-11-04 13:58:40.644498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:53.981 Running I/O for 4 seconds... 
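The queue depths of the two randwrite runs account for the jump in average latency between them: with a closed-loop load generator, Little's law predicts an average latency of roughly queue_depth / IOPS. A rough check against the Total rows reported above and in the table below (all values copied from the log; agreement is only approximate since the measured runtime overshoots the 4 s target slightly):

    for qd, iops, reported_us in ((1, 2211.72, 476.59), (128, 8297.49, 15385.13)):
        print(qd, 1e6 * qd / iops, reported_us)
    # qd=1:   ~452 us predicted vs 476.59 us reported
    # qd=128: ~15427 us predicted vs 15385.13 us reported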
00:24:55.851 8517.00 IOPS, 33.27 MiB/s [2024-11-04T13:58:43.710Z] 7909.50 IOPS, 30.90 MiB/s [2024-11-04T13:58:45.083Z] 8313.00 IOPS, 32.47 MiB/s [2024-11-04T13:58:45.083Z] 8310.75 IOPS, 32.46 MiB/s 00:24:58.161 Latency(us) 00:24:58.161 [2024-11-04T13:58:45.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.161 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:24:58.161 ftl0 : 4.02 8297.49 32.41 0.00 0.00 15385.13 267.22 31831.77 00:24:58.161 [2024-11-04T13:58:45.083Z] =================================================================================================================== 00:24:58.161 [2024-11-04T13:58:45.083Z] Total : 8297.49 32.41 0.00 0.00 15385.13 0.00 31831.77 00:24:58.161 { 00:24:58.161 "results": [ 00:24:58.161 { 00:24:58.161 "job": "ftl0", 00:24:58.161 "core_mask": "0x1", 00:24:58.161 "workload": "randwrite", 00:24:58.161 "status": "finished", 00:24:58.161 "queue_depth": 128, 00:24:58.161 "io_size": 4096, 00:24:58.161 "runtime": 4.021456, 00:24:58.161 "iops": 8297.492251562619, 00:24:58.161 "mibps": 32.41207910766648, 00:24:58.161 "io_failed": 0, 00:24:58.162 "io_timeout": 0, 00:24:58.162 "avg_latency_us": 15385.133774931215, 00:24:58.162 "min_latency_us": 267.2152380952381, 00:24:58.162 "max_latency_us": 31831.77142857143 00:24:58.162 } 00:24:58.162 ], 00:24:58.162 "core_count": 1 00:24:58.162 } 00:24:58.162 [2024-11-04 13:58:44.679268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:58.162 13:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:24:58.162 [2024-11-04 13:58:44.866262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:58.162 Running I/O for 4 seconds... 
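The verify run below reports a verification LBA range of start 0x0, length 0x1400000. Decoded, that length is exactly the L2P entry count from the startup dump, so the pass walks the entire mapped address space at one 4 KiB block per LBA. A quick conversion (values copied from the log, illustrative only):

    length = 0x1400000                   # "Verification LBA range ... length 0x1400000"
    print(length)                        # 20971520 -> equals "L2P entries: 20971520"
    print(length * 4096 / (1 << 30))     # 80.0 -> GiB of logical space covered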
00:25:00.032 5634.00 IOPS, 22.01 MiB/s [2024-11-04T13:58:47.889Z] 5947.50 IOPS, 23.23 MiB/s [2024-11-04T13:58:49.265Z] 5768.33 IOPS, 22.53 MiB/s [2024-11-04T13:58:49.265Z] 5396.00 IOPS, 21.08 MiB/s 00:25:02.343 Latency(us) 00:25:02.343 [2024-11-04T13:58:49.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.343 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:02.343 Verification LBA range: start 0x0 length 0x1400000 00:25:02.343 ftl0 : 4.02 5401.60 21.10 0.00 0.00 23588.64 306.22 66909.14 00:25:02.343 [2024-11-04T13:58:49.265Z] =================================================================================================================== 00:25:02.343 [2024-11-04T13:58:49.265Z] Total : 5401.60 21.10 0.00 0.00 23588.64 0.00 66909.14 00:25:02.343 { 00:25:02.343 "results": [ 00:25:02.343 { 00:25:02.343 "job": "ftl0", 00:25:02.343 "core_mask": "0x1", 00:25:02.343 "workload": "verify", 00:25:02.343 "status": "finished", 00:25:02.343 "verify_range": { 00:25:02.343 "start": 0, 00:25:02.343 "length": 20971520 00:25:02.343 }, 00:25:02.343 "queue_depth": 128, 00:25:02.343 "io_size": 4096, 00:25:02.343 "runtime": 4.019733, 00:25:02.343 "iops": 5401.602544248585, 00:25:02.343 "mibps": 21.100009938471036, 00:25:02.343 "io_failed": 0, 00:25:02.343 "io_timeout": 0, 00:25:02.343 "avg_latency_us": 23588.642461549258, 00:25:02.343 "min_latency_us": 306.2247619047619, 00:25:02.343 "max_latency_us": 66909.13523809524 00:25:02.343 } 00:25:02.343 ], 00:25:02.343 "core_count": 1 00:25:02.343 } 00:25:02.343 [2024-11-04 13:58:48.924828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:02.343 13:58:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:25:02.343 [2024-11-04 13:58:49.158350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.343 [2024-11-04 13:58:49.158435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:02.343 [2024-11-04 13:58:49.158465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:02.343 [2024-11-04 13:58:49.158486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.343 [2024-11-04 13:58:49.158525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:02.343 [2024-11-04 13:58:49.165321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.343 [2024-11-04 13:58:49.165369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:02.343 [2024-11-04 13:58:49.165395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:25:02.343 [2024-11-04 13:58:49.165412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.343 [2024-11-04 13:58:49.167255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.343 [2024-11-04 13:58:49.167320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:02.343 [2024-11-04 13:58:49.167346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:25:02.343 [2024-11-04 13:58:49.167363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.602 [2024-11-04 13:58:49.358085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.602 [2024-11-04 13:58:49.358313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:25:02.602 [2024-11-04 13:58:49.358354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 190.673 ms 00:25:02.602 [2024-11-04 13:58:49.358368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.602 [2024-11-04 13:58:49.364688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.602 [2024-11-04 13:58:49.364862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:02.602 [2024-11-04 13:58:49.364893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.261 ms 00:25:02.602 [2024-11-04 13:58:49.364916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.602 [2024-11-04 13:58:49.409607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.602 [2024-11-04 13:58:49.409672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:02.602 [2024-11-04 13:58:49.409695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.567 ms 00:25:02.602 [2024-11-04 13:58:49.409708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.602 [2024-11-04 13:58:49.436208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.602 [2024-11-04 13:58:49.436270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:02.602 [2024-11-04 13:58:49.436299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.418 ms 00:25:02.602 [2024-11-04 13:58:49.436313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.602 [2024-11-04 13:58:49.436524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.602 [2024-11-04 13:58:49.436542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:02.602 [2024-11-04 13:58:49.436563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:25:02.602 [2024-11-04 13:58:49.436589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.602 [2024-11-04 13:58:49.483040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.602 [2024-11-04 13:58:49.483106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:02.602 [2024-11-04 13:58:49.483128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.417 ms 00:25:02.602 [2024-11-04 13:58:49.483141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.861 [2024-11-04 13:58:49.528342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.861 [2024-11-04 13:58:49.528631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:02.861 [2024-11-04 13:58:49.528683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.124 ms 00:25:02.861 [2024-11-04 13:58:49.528696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.861 [2024-11-04 13:58:49.570740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.861 [2024-11-04 13:58:49.570791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:02.861 [2024-11-04 13:58:49.570812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.975 ms 00:25:02.861 [2024-11-04 13:58:49.570824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.861 [2024-11-04 13:58:49.612086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.861 [2024-11-04 13:58:49.612139] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:02.861 [2024-11-04 13:58:49.612181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.124 ms 00:25:02.861 [2024-11-04 13:58:49.612194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.861 [2024-11-04 13:58:49.612245] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:02.861 [2024-11-04 13:58:49.612266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:02.861 [2024-11-04 13:58:49.612284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:02.861 [2024-11-04 13:58:49.612297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:25:02.862 [2024-11-04 13:58:49.612611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.612990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:02.862 [2024-11-04 13:58:49.613443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613752] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:02.863 [2024-11-04 13:58:49.613818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:02.863 [2024-11-04 13:58:49.613833] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c72dfc-1213-40da-8fe2-4267a37a7098 00:25:02.863 [2024-11-04 13:58:49.613847] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:02.863 [2024-11-04 13:58:49.613862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:02.863 [2024-11-04 13:58:49.613878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:02.863 [2024-11-04 13:58:49.613892] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:02.863 [2024-11-04 13:58:49.613904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:02.863 [2024-11-04 13:58:49.613920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:02.863 [2024-11-04 13:58:49.613931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:02.863 [2024-11-04 13:58:49.613949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:02.863 [2024-11-04 13:58:49.613971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:02.863 [2024-11-04 13:58:49.613984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.863 [2024-11-04 13:58:49.613996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:02.863 [2024-11-04 13:58:49.614011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:25:02.863 [2024-11-04 13:58:49.614023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.863 [2024-11-04 13:58:49.637039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.863 [2024-11-04 13:58:49.637094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:02.863 [2024-11-04 13:58:49.637114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.952 ms 00:25:02.863 [2024-11-04 13:58:49.637126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.863 [2024-11-04 13:58:49.637851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.863 [2024-11-04 13:58:49.637871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:02.863 [2024-11-04 13:58:49.637888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:25:02.863 [2024-11-04 13:58:49.637900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.863 [2024-11-04 13:58:49.701838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.863 [2024-11-04 13:58:49.702089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.863 [2024-11-04 13:58:49.702141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.863 [2024-11-04 13:58:49.702155] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:02.863 [2024-11-04 13:58:49.702238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.863 [2024-11-04 13:58:49.702252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.863 [2024-11-04 13:58:49.702267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.863 [2024-11-04 13:58:49.702279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.863 [2024-11-04 13:58:49.702431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.863 [2024-11-04 13:58:49.702460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.863 [2024-11-04 13:58:49.702476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.863 [2024-11-04 13:58:49.702488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.863 [2024-11-04 13:58:49.702512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.863 [2024-11-04 13:58:49.702525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.863 [2024-11-04 13:58:49.702540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.863 [2024-11-04 13:58:49.702552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.845657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.845933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.123 [2024-11-04 13:58:49.845986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.845999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.966897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.966967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.123 [2024-11-04 13:58:49.966989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.967001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.967151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.123 [2024-11-04 13:58:49.967170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.967182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.967261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.123 [2024-11-04 13:58:49.967276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.967287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.967437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.123 [2024-11-04 13:58:49.967459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:25:03.123 [2024-11-04 13:58:49.967471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.967528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:03.123 [2024-11-04 13:58:49.967543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.967554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.967658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.123 [2024-11-04 13:58:49.967674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.967690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:03.123 [2024-11-04 13:58:49.967768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.123 [2024-11-04 13:58:49.967785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:03.123 [2024-11-04 13:58:49.967796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.123 [2024-11-04 13:58:49.967953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 809.551 ms, result 0 00:25:03.123 true 00:25:03.123 13:58:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76080 00:25:03.123 13:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 76080 ']' 00:25:03.123 13:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 76080 00:25:03.123 13:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:25:03.123 13:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:03.123 13:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76080
00:25:03.123 killing process with pid 76080 Received shutdown signal, test time was about 4.000000 seconds
00:25:03.123 
00:25:03.123 Latency(us)
00:25:03.123 [2024-11-04T13:58:50.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:03.123 [2024-11-04T13:58:50.045Z] ===================================================================================================================
00:25:03.123 [2024-11-04T13:58:50.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:03.123 13:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:03.123 13:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:03.123 13:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76080' 00:25:03.123 13:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 76080 00:25:03.123 13:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 76080 00:25:06.456 Remove shared memory files 00:25:06.456 13:58:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:06.456 13:58:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:06.456 13:58:52 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:06.456 13:58:52 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:06.456 13:58:52 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:06.456 13:58:52 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:06.456 13:58:52 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:06.456 13:58:53 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:06.456 ************************************ 00:25:06.456 END TEST ftl_bdevperf 00:25:06.456 ************************************ 00:25:06.456 00:25:06.456 real 0m25.671s 00:25:06.456 user 0m29.502s 00:25:06.456 sys 0m1.340s 00:25:06.456 13:58:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:06.456 13:58:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:06.456 13:58:53 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:06.456 13:58:53 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:06.456 13:58:53 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:06.456 13:58:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:06.456 ************************************ 00:25:06.456 START TEST ftl_trim 00:25:06.456 ************************************ 00:25:06.456 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:06.456 * Looking for test storage... 00:25:06.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:06.456 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:06.456 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:25:06.456 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:06.456 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.456 13:58:53 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:06.456 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:06.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.457 --rc genhtml_branch_coverage=1 00:25:06.457 --rc genhtml_function_coverage=1 00:25:06.457 --rc genhtml_legend=1 00:25:06.457 --rc geninfo_all_blocks=1 00:25:06.457 --rc geninfo_unexecuted_blocks=1 00:25:06.457 00:25:06.457 ' 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:06.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.457 --rc genhtml_branch_coverage=1 00:25:06.457 --rc genhtml_function_coverage=1 00:25:06.457 --rc genhtml_legend=1 00:25:06.457 --rc geninfo_all_blocks=1 00:25:06.457 --rc geninfo_unexecuted_blocks=1 00:25:06.457 00:25:06.457 ' 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:06.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.457 --rc genhtml_branch_coverage=1 00:25:06.457 --rc genhtml_function_coverage=1 00:25:06.457 --rc genhtml_legend=1 00:25:06.457 --rc geninfo_all_blocks=1 00:25:06.457 --rc geninfo_unexecuted_blocks=1 00:25:06.457 00:25:06.457 ' 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:06.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.457 --rc genhtml_branch_coverage=1 00:25:06.457 --rc genhtml_function_coverage=1 00:25:06.457 --rc genhtml_legend=1 00:25:06.457 --rc geninfo_all_blocks=1 00:25:06.457 --rc geninfo_unexecuted_blocks=1 00:25:06.457 00:25:06.457 ' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:06.457 13:58:53 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76448 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:06.457 13:58:53 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76448 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76448 ']' 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:06.457 13:58:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:06.715 [2024-11-04 13:58:53.380731] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:25:06.715 [2024-11-04 13:58:53.381109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76448 ] 00:25:06.715 [2024-11-04 13:58:53.573869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:06.972 [2024-11-04 13:58:53.760025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.972 [2024-11-04 13:58:53.760177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.972 [2024-11-04 13:58:53.760194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.346 13:58:54 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.346 13:58:54 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:25:08.346 13:58:54 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:08.346 13:58:54 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:08.346 13:58:54 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:08.346 13:58:54 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:08.346 13:58:54 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:08.346 13:58:54 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:08.346 13:58:55 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:08.346 13:58:55 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:08.346 13:58:55 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:08.346 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:25:08.346 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:08.346 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:25:08.346 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:25:08.346 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:08.911 { 00:25:08.911 "name": "nvme0n1", 00:25:08.911 "aliases": [ 
00:25:08.911 "a7897f65-64ab-4ec5-a8f5-7d85f6c66ca6" 00:25:08.911 ], 00:25:08.911 "product_name": "NVMe disk", 00:25:08.911 "block_size": 4096, 00:25:08.911 "num_blocks": 1310720, 00:25:08.911 "uuid": "a7897f65-64ab-4ec5-a8f5-7d85f6c66ca6", 00:25:08.911 "numa_id": -1, 00:25:08.911 "assigned_rate_limits": { 00:25:08.911 "rw_ios_per_sec": 0, 00:25:08.911 "rw_mbytes_per_sec": 0, 00:25:08.911 "r_mbytes_per_sec": 0, 00:25:08.911 "w_mbytes_per_sec": 0 00:25:08.911 }, 00:25:08.911 "claimed": true, 00:25:08.911 "claim_type": "read_many_write_one", 00:25:08.911 "zoned": false, 00:25:08.911 "supported_io_types": { 00:25:08.911 "read": true, 00:25:08.911 "write": true, 00:25:08.911 "unmap": true, 00:25:08.911 "flush": true, 00:25:08.911 "reset": true, 00:25:08.911 "nvme_admin": true, 00:25:08.911 "nvme_io": true, 00:25:08.911 "nvme_io_md": false, 00:25:08.911 "write_zeroes": true, 00:25:08.911 "zcopy": false, 00:25:08.911 "get_zone_info": false, 00:25:08.911 "zone_management": false, 00:25:08.911 "zone_append": false, 00:25:08.911 "compare": true, 00:25:08.911 "compare_and_write": false, 00:25:08.911 "abort": true, 00:25:08.911 "seek_hole": false, 00:25:08.911 "seek_data": false, 00:25:08.911 "copy": true, 00:25:08.911 "nvme_iov_md": false 00:25:08.911 }, 00:25:08.911 "driver_specific": { 00:25:08.911 "nvme": [ 00:25:08.911 { 00:25:08.911 "pci_address": "0000:00:11.0", 00:25:08.911 "trid": { 00:25:08.911 "trtype": "PCIe", 00:25:08.911 "traddr": "0000:00:11.0" 00:25:08.911 }, 00:25:08.911 "ctrlr_data": { 00:25:08.911 "cntlid": 0, 00:25:08.911 "vendor_id": "0x1b36", 00:25:08.911 "model_number": "QEMU NVMe Ctrl", 00:25:08.911 "serial_number": "12341", 00:25:08.911 "firmware_revision": "8.0.0", 00:25:08.911 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:08.911 "oacs": { 00:25:08.911 "security": 0, 00:25:08.911 "format": 1, 00:25:08.911 "firmware": 0, 00:25:08.911 "ns_manage": 1 00:25:08.911 }, 00:25:08.911 "multi_ctrlr": false, 00:25:08.911 "ana_reporting": false 00:25:08.911 }, 00:25:08.911 "vs": { 00:25:08.911 "nvme_version": "1.4" 00:25:08.911 }, 00:25:08.911 "ns_data": { 00:25:08.911 "id": 1, 00:25:08.911 "can_share": false 00:25:08.911 } 00:25:08.911 } 00:25:08.911 ], 00:25:08.911 "mp_policy": "active_passive" 00:25:08.911 } 00:25:08.911 } 00:25:08.911 ]' 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:25:08.911 13:58:55 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:25:08.911 13:58:55 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:08.911 13:58:55 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:08.911 13:58:55 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:08.911 13:58:55 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:08.911 13:58:55 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:09.169 13:58:55 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=bc65a3a1-6839-4642-b420-817427150e8a 00:25:09.169 13:58:55 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:09.169 13:58:55 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u bc65a3a1-6839-4642-b420-817427150e8a 00:25:09.428 13:58:56 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:09.704 13:58:56 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f656c3de-fddc-47a9-a09e-e3c990f8896c 00:25:09.704 13:58:56 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f656c3de-fddc-47a9-a09e-e3c990f8896c 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:09.962 13:58:56 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:09.962 13:58:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:09.962 13:58:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:09.962 13:58:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:25:09.962 13:58:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:25:10.220 13:58:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:10.478 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:10.478 { 00:25:10.478 "name": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:10.478 "aliases": [ 00:25:10.478 "lvs/nvme0n1p0" 00:25:10.478 ], 00:25:10.478 "product_name": "Logical Volume", 00:25:10.478 "block_size": 4096, 00:25:10.478 "num_blocks": 26476544, 00:25:10.478 "uuid": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:10.478 "assigned_rate_limits": { 00:25:10.478 "rw_ios_per_sec": 0, 00:25:10.478 "rw_mbytes_per_sec": 0, 00:25:10.478 "r_mbytes_per_sec": 0, 00:25:10.478 "w_mbytes_per_sec": 0 00:25:10.478 }, 00:25:10.478 "claimed": false, 00:25:10.478 "zoned": false, 00:25:10.478 "supported_io_types": { 00:25:10.478 "read": true, 00:25:10.478 "write": true, 00:25:10.478 "unmap": true, 00:25:10.478 "flush": false, 00:25:10.478 "reset": true, 00:25:10.478 "nvme_admin": false, 00:25:10.478 "nvme_io": false, 00:25:10.478 "nvme_io_md": false, 00:25:10.478 "write_zeroes": true, 00:25:10.478 "zcopy": false, 00:25:10.478 "get_zone_info": false, 00:25:10.478 "zone_management": false, 00:25:10.478 "zone_append": false, 00:25:10.478 "compare": false, 00:25:10.478 "compare_and_write": false, 00:25:10.478 "abort": false, 00:25:10.478 "seek_hole": true, 00:25:10.478 "seek_data": true, 00:25:10.478 "copy": false, 00:25:10.478 "nvme_iov_md": false 00:25:10.478 }, 00:25:10.478 "driver_specific": { 00:25:10.478 "lvol": { 00:25:10.478 "lvol_store_uuid": "f656c3de-fddc-47a9-a09e-e3c990f8896c", 00:25:10.478 "base_bdev": "nvme0n1", 00:25:10.478 "thin_provision": true, 00:25:10.478 "num_allocated_clusters": 0, 00:25:10.478 "snapshot": false, 00:25:10.478 "clone": false, 00:25:10.478 "esnap_clone": false 00:25:10.478 } 00:25:10.478 } 00:25:10.478 } 00:25:10.478 ]' 00:25:10.478 13:58:57 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:10.478 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:25:10.478 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:10.478 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:25:10.478 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:25:10.478 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:25:10.478 13:58:57 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:10.478 13:58:57 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:10.478 13:58:57 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:11.044 13:58:57 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:11.044 13:58:57 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:11.044 13:58:57 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:11.044 { 00:25:11.044 "name": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:11.044 "aliases": [ 00:25:11.044 "lvs/nvme0n1p0" 00:25:11.044 ], 00:25:11.044 "product_name": "Logical Volume", 00:25:11.044 "block_size": 4096, 00:25:11.044 "num_blocks": 26476544, 00:25:11.044 "uuid": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:11.044 "assigned_rate_limits": { 00:25:11.044 "rw_ios_per_sec": 0, 00:25:11.044 "rw_mbytes_per_sec": 0, 00:25:11.044 "r_mbytes_per_sec": 0, 00:25:11.044 "w_mbytes_per_sec": 0 00:25:11.044 }, 00:25:11.044 "claimed": false, 00:25:11.044 "zoned": false, 00:25:11.044 "supported_io_types": { 00:25:11.044 "read": true, 00:25:11.044 "write": true, 00:25:11.044 "unmap": true, 00:25:11.044 "flush": false, 00:25:11.044 "reset": true, 00:25:11.044 "nvme_admin": false, 00:25:11.044 "nvme_io": false, 00:25:11.044 "nvme_io_md": false, 00:25:11.044 "write_zeroes": true, 00:25:11.044 "zcopy": false, 00:25:11.044 "get_zone_info": false, 00:25:11.044 "zone_management": false, 00:25:11.044 "zone_append": false, 00:25:11.044 "compare": false, 00:25:11.044 "compare_and_write": false, 00:25:11.044 "abort": false, 00:25:11.044 "seek_hole": true, 00:25:11.044 "seek_data": true, 00:25:11.044 "copy": false, 00:25:11.044 "nvme_iov_md": false 00:25:11.044 }, 00:25:11.044 "driver_specific": { 00:25:11.044 "lvol": { 00:25:11.044 "lvol_store_uuid": "f656c3de-fddc-47a9-a09e-e3c990f8896c", 00:25:11.044 "base_bdev": "nvme0n1", 00:25:11.044 "thin_provision": true, 00:25:11.044 "num_allocated_clusters": 0, 00:25:11.044 "snapshot": false, 00:25:11.044 "clone": false, 00:25:11.044 "esnap_clone": false 00:25:11.044 } 00:25:11.044 } 00:25:11.044 } 00:25:11.044 ]' 00:25:11.044 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:11.301 13:58:58 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:25:11.301 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:11.301 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:25:11.301 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:25:11.301 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:25:11.301 13:58:58 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:11.301 13:58:58 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:11.558 13:58:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:11.558 13:58:58 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:11.558 13:58:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:11.558 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:11.558 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:11.558 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:25:11.558 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:25:11.558 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6307ee10-c6be-4aeb-83e2-54d1b4989f54 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:11.816 { 00:25:11.816 "name": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:11.816 "aliases": [ 00:25:11.816 "lvs/nvme0n1p0" 00:25:11.816 ], 00:25:11.816 "product_name": "Logical Volume", 00:25:11.816 "block_size": 4096, 00:25:11.816 "num_blocks": 26476544, 00:25:11.816 "uuid": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:11.816 "assigned_rate_limits": { 00:25:11.816 "rw_ios_per_sec": 0, 00:25:11.816 "rw_mbytes_per_sec": 0, 00:25:11.816 "r_mbytes_per_sec": 0, 00:25:11.816 "w_mbytes_per_sec": 0 00:25:11.816 }, 00:25:11.816 "claimed": false, 00:25:11.816 "zoned": false, 00:25:11.816 "supported_io_types": { 00:25:11.816 "read": true, 00:25:11.816 "write": true, 00:25:11.816 "unmap": true, 00:25:11.816 "flush": false, 00:25:11.816 "reset": true, 00:25:11.816 "nvme_admin": false, 00:25:11.816 "nvme_io": false, 00:25:11.816 "nvme_io_md": false, 00:25:11.816 "write_zeroes": true, 00:25:11.816 "zcopy": false, 00:25:11.816 "get_zone_info": false, 00:25:11.816 "zone_management": false, 00:25:11.816 "zone_append": false, 00:25:11.816 "compare": false, 00:25:11.816 "compare_and_write": false, 00:25:11.816 "abort": false, 00:25:11.816 "seek_hole": true, 00:25:11.816 "seek_data": true, 00:25:11.816 "copy": false, 00:25:11.816 "nvme_iov_md": false 00:25:11.816 }, 00:25:11.816 "driver_specific": { 00:25:11.816 "lvol": { 00:25:11.816 "lvol_store_uuid": "f656c3de-fddc-47a9-a09e-e3c990f8896c", 00:25:11.816 "base_bdev": "nvme0n1", 00:25:11.816 "thin_provision": true, 00:25:11.816 "num_allocated_clusters": 0, 00:25:11.816 "snapshot": false, 00:25:11.816 "clone": false, 00:25:11.816 "esnap_clone": false 00:25:11.816 } 00:25:11.816 } 00:25:11.816 } 00:25:11.816 ]' 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:25:11.816 13:58:58 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:25:11.816 13:58:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:11.816 13:58:58 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6307ee10-c6be-4aeb-83e2-54d1b4989f54 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:12.076 [2024-11-04 13:58:58.949386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.949468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:12.076 [2024-11-04 13:58:58.949502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:12.076 [2024-11-04 13:58:58.949530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.076 [2024-11-04 13:58:58.954211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.954470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.076 [2024-11-04 13:58:58.954523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:25:12.076 [2024-11-04 13:58:58.954543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.076 [2024-11-04 13:58:58.954846] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:12.076 [2024-11-04 13:58:58.956251] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:12.076 [2024-11-04 13:58:58.956316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.956336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.076 [2024-11-04 13:58:58.956360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:25:12.076 [2024-11-04 13:58:58.956379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.076 [2024-11-04 13:58:58.956614] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:25:12.076 [2024-11-04 13:58:58.958509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.958595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:12.076 [2024-11-04 13:58:58.958621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:12.076 [2024-11-04 13:58:58.958643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.076 [2024-11-04 13:58:58.967058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.967136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.076 [2024-11-04 13:58:58.967162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.259 ms 00:25:12.076 [2024-11-04 13:58:58.967185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.076 [2024-11-04 13:58:58.967439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.967469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.076 [2024-11-04 13:58:58.967486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.123 ms 00:25:12.076 [2024-11-04 13:58:58.967514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.076 [2024-11-04 13:58:58.967589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.076 [2024-11-04 13:58:58.967612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:12.077 [2024-11-04 13:58:58.967628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:12.077 [2024-11-04 13:58:58.967646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.077 [2024-11-04 13:58:58.967701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:12.077 [2024-11-04 13:58:58.973877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.077 [2024-11-04 13:58:58.973926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:12.077 [2024-11-04 13:58:58.973955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:25:12.077 [2024-11-04 13:58:58.973971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.077 [2024-11-04 13:58:58.974066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.077 [2024-11-04 13:58:58.974083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:12.077 [2024-11-04 13:58:58.974104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:12.077 [2024-11-04 13:58:58.974141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.077 [2024-11-04 13:58:58.974194] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:12.077 [2024-11-04 13:58:58.974356] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:12.077 [2024-11-04 13:58:58.974384] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:12.077 [2024-11-04 13:58:58.974405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:12.077 [2024-11-04 13:58:58.974427] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:12.077 [2024-11-04 13:58:58.974445] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:12.077 [2024-11-04 13:58:58.974464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:12.077 [2024-11-04 13:58:58.974479] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:12.077 [2024-11-04 13:58:58.974498] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:12.077 [2024-11-04 13:58:58.974516] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:12.077 [2024-11-04 13:58:58.974535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.077 [2024-11-04 13:58:58.974550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:12.077 [2024-11-04 13:58:58.974590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:25:12.077 [2024-11-04 13:58:58.974606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.077 [2024-11-04 13:58:58.974722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.077 
[2024-11-04 13:58:58.974737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:12.077 [2024-11-04 13:58:58.974757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:12.077 [2024-11-04 13:58:58.974771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.077 [2024-11-04 13:58:58.974917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:12.077 [2024-11-04 13:58:58.974941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:12.077 [2024-11-04 13:58:58.974961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:12.077 [2024-11-04 13:58:58.974977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.974996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:12.077 [2024-11-04 13:58:58.975010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:12.077 [2024-11-04 13:58:58.975061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:12.077 [2024-11-04 13:58:58.975093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:12.077 [2024-11-04 13:58:58.975107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:12.077 [2024-11-04 13:58:58.975125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:12.077 [2024-11-04 13:58:58.975139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:12.077 [2024-11-04 13:58:58.975157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:12.077 [2024-11-04 13:58:58.975171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:12.077 [2024-11-04 13:58:58.975206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:12.077 [2024-11-04 13:58:58.975258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:12.077 [2024-11-04 13:58:58.975304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:12.077 [2024-11-04 13:58:58.975353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:25:12.077 [2024-11-04 13:58:58.975399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:12.077 [2024-11-04 13:58:58.975451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:12.077 [2024-11-04 13:58:58.975482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:12.077 [2024-11-04 13:58:58.975497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:12.077 [2024-11-04 13:58:58.975514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:12.077 [2024-11-04 13:58:58.975531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:12.077 [2024-11-04 13:58:58.975549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:12.077 [2024-11-04 13:58:58.975574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:12.077 [2024-11-04 13:58:58.975609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:12.077 [2024-11-04 13:58:58.975631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975645] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:12.077 [2024-11-04 13:58:58.975663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:12.077 [2024-11-04 13:58:58.975678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.077 [2024-11-04 13:58:58.975712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:12.077 [2024-11-04 13:58:58.975735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:12.077 [2024-11-04 13:58:58.975749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:12.077 [2024-11-04 13:58:58.975766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:12.077 [2024-11-04 13:58:58.975780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:12.077 [2024-11-04 13:58:58.975798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:12.077 [2024-11-04 13:58:58.975826] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:12.077 [2024-11-04 13:58:58.975854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:12.077 [2024-11-04 13:58:58.975878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:12.077 [2024-11-04 13:58:58.975897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:12.077 [2024-11-04 13:58:58.975912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:25:12.077 [2024-11-04 13:58:58.975931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:12.077 [2024-11-04 13:58:58.975947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:12.077 [2024-11-04 13:58:58.975965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:12.077 [2024-11-04 13:58:58.975980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:12.077 [2024-11-04 13:58:58.975999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:12.077 [2024-11-04 13:58:58.976014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:12.077 [2024-11-04 13:58:58.976035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:12.077 [2024-11-04 13:58:58.976050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:12.077 [2024-11-04 13:58:58.976069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:12.077 [2024-11-04 13:58:58.976084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:12.077 [2024-11-04 13:58:58.976103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:12.077 [2024-11-04 13:58:58.976118] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:12.077 [2024-11-04 13:58:58.976149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:12.078 [2024-11-04 13:58:58.976164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:12.078 [2024-11-04 13:58:58.976185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:12.078 [2024-11-04 13:58:58.976200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:12.078 [2024-11-04 13:58:58.976220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:12.078 [2024-11-04 13:58:58.976236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.078 [2024-11-04 13:58:58.976255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:12.078 [2024-11-04 13:58:58.976271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:25:12.078 [2024-11-04 13:58:58.976289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.078 [2024-11-04 13:58:58.976398] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:25:12.078 [2024-11-04 13:58:58.976442] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:15.389 [2024-11-04 13:59:01.740319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.740675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:15.389 [2024-11-04 13:59:01.740805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2763.901 ms 00:25:15.389 [2024-11-04 13:59:01.740879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.785016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.785321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:15.389 [2024-11-04 13:59:01.785459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.639 ms 00:25:15.389 [2024-11-04 13:59:01.785517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.785858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.785920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:15.389 [2024-11-04 13:59:01.786039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:15.389 [2024-11-04 13:59:01.786102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.855430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.855515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:15.389 [2024-11-04 13:59:01.855546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.209 ms 00:25:15.389 [2024-11-04 13:59:01.855597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.855793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.855826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:15.389 [2024-11-04 13:59:01.855850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:15.389 [2024-11-04 13:59:01.855877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.856449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.856511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:15.389 [2024-11-04 13:59:01.856536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:25:15.389 [2024-11-04 13:59:01.856578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.856775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.856804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:15.389 [2024-11-04 13:59:01.856826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:25:15.389 [2024-11-04 13:59:01.856872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.881679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.881947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:25:15.389 [2024-11-04 13:59:01.881979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.714 ms 00:25:15.389 [2024-11-04 13:59:01.882000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.897537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:15.389 [2024-11-04 13:59:01.916092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.916173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:15.389 [2024-11-04 13:59:01.916204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.903 ms 00:25:15.389 [2024-11-04 13:59:01.916219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.995045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.995132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:15.389 [2024-11-04 13:59:01.995160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.667 ms 00:25:15.389 [2024-11-04 13:59:01.995175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:01.995475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:01.995495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:15.389 [2024-11-04 13:59:01.995519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:25:15.389 [2024-11-04 13:59:01.995533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:02.039863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:02.039934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:15.389 [2024-11-04 13:59:02.039972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.246 ms 00:25:15.389 [2024-11-04 13:59:02.039988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:02.082578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:02.082832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:15.389 [2024-11-04 13:59:02.082877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.437 ms 00:25:15.389 [2024-11-04 13:59:02.082893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:02.083940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:02.083979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:15.389 [2024-11-04 13:59:02.084001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:25:15.389 [2024-11-04 13:59:02.084016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:02.198293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:02.198373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:15.389 [2024-11-04 13:59:02.198427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.209 ms 00:25:15.389 [2024-11-04 13:59:02.198443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:15.389 [2024-11-04 13:59:02.248123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:02.248210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:15.389 [2024-11-04 13:59:02.248238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.468 ms 00:25:15.389 [2024-11-04 13:59:02.248255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.389 [2024-11-04 13:59:02.295664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.389 [2024-11-04 13:59:02.295741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:15.389 [2024-11-04 13:59:02.295767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.238 ms 00:25:15.389 [2024-11-04 13:59:02.295783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.647 [2024-11-04 13:59:02.341920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.647 [2024-11-04 13:59:02.342190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:15.647 [2024-11-04 13:59:02.342230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.995 ms 00:25:15.647 [2024-11-04 13:59:02.342267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.647 [2024-11-04 13:59:02.342409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.647 [2024-11-04 13:59:02.342433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:15.647 [2024-11-04 13:59:02.342458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:15.647 [2024-11-04 13:59:02.342473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.647 [2024-11-04 13:59:02.342616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.647 [2024-11-04 13:59:02.342636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:15.647 [2024-11-04 13:59:02.342655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:15.647 [2024-11-04 13:59:02.342670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.647 [2024-11-04 13:59:02.344772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:15.647 [2024-11-04 13:59:02.351727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3394.573 ms, result 0 00:25:15.647 [2024-11-04 13:59:02.353014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:15.647 { 00:25:15.647 "name": "ftl0", 00:25:15.647 "uuid": "78bfc9c3-5954-4389-a76d-b2e42aa87556" 00:25:15.647 } 00:25:15.647 13:59:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:15.647 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:25:15.647 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:15.647 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:25:15.647 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:15.647 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:15.647 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:15.905 13:59:02 ftl.ftl_trim --
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:16.164 [ 00:25:16.164 { 00:25:16.164 "name": "ftl0", 00:25:16.164 "aliases": [ 00:25:16.164 "78bfc9c3-5954-4389-a76d-b2e42aa87556" 00:25:16.164 ], 00:25:16.164 "product_name": "FTL disk", 00:25:16.164 "block_size": 4096, 00:25:16.164 "num_blocks": 23592960, 00:25:16.164 "uuid": "78bfc9c3-5954-4389-a76d-b2e42aa87556", 00:25:16.164 "assigned_rate_limits": { 00:25:16.164 "rw_ios_per_sec": 0, 00:25:16.164 "rw_mbytes_per_sec": 0, 00:25:16.164 "r_mbytes_per_sec": 0, 00:25:16.164 "w_mbytes_per_sec": 0 00:25:16.164 }, 00:25:16.164 "claimed": false, 00:25:16.164 "zoned": false, 00:25:16.164 "supported_io_types": { 00:25:16.164 "read": true, 00:25:16.164 "write": true, 00:25:16.164 "unmap": true, 00:25:16.164 "flush": true, 00:25:16.164 "reset": false, 00:25:16.164 "nvme_admin": false, 00:25:16.164 "nvme_io": false, 00:25:16.164 "nvme_io_md": false, 00:25:16.164 "write_zeroes": true, 00:25:16.164 "zcopy": false, 00:25:16.164 "get_zone_info": false, 00:25:16.164 "zone_management": false, 00:25:16.164 "zone_append": false, 00:25:16.164 "compare": false, 00:25:16.164 "compare_and_write": false, 00:25:16.164 "abort": false, 00:25:16.164 "seek_hole": false, 00:25:16.164 "seek_data": false, 00:25:16.164 "copy": false, 00:25:16.164 "nvme_iov_md": false 00:25:16.164 }, 00:25:16.164 "driver_specific": { 00:25:16.164 "ftl": { 00:25:16.164 "base_bdev": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 00:25:16.164 "cache": "nvc0n1p0" 00:25:16.164 } 00:25:16.164 } 00:25:16.164 } 00:25:16.164 ] 00:25:16.164 13:59:03 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:25:16.164 13:59:03 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:16.164 13:59:03 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:16.423 13:59:03 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:16.423 13:59:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:16.682 13:59:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:16.682 { 00:25:16.682 "name": "ftl0", 00:25:16.682 "aliases": [ 00:25:16.682 "78bfc9c3-5954-4389-a76d-b2e42aa87556" 00:25:16.682 ], 00:25:16.682 "product_name": "FTL disk", 00:25:16.682 "block_size": 4096, 00:25:16.682 "num_blocks": 23592960, 00:25:16.682 "uuid": "78bfc9c3-5954-4389-a76d-b2e42aa87556", 00:25:16.682 "assigned_rate_limits": { 00:25:16.682 "rw_ios_per_sec": 0, 00:25:16.682 "rw_mbytes_per_sec": 0, 00:25:16.682 "r_mbytes_per_sec": 0, 00:25:16.682 "w_mbytes_per_sec": 0 00:25:16.682 }, 00:25:16.682 "claimed": false, 00:25:16.682 "zoned": false, 00:25:16.682 "supported_io_types": { 00:25:16.682 "read": true, 00:25:16.682 "write": true, 00:25:16.682 "unmap": true, 00:25:16.682 "flush": true, 00:25:16.682 "reset": false, 00:25:16.682 "nvme_admin": false, 00:25:16.682 "nvme_io": false, 00:25:16.682 "nvme_io_md": false, 00:25:16.682 "write_zeroes": true, 00:25:16.682 "zcopy": false, 00:25:16.682 "get_zone_info": false, 00:25:16.682 "zone_management": false, 00:25:16.682 "zone_append": false, 00:25:16.682 "compare": false, 00:25:16.682 "compare_and_write": false, 00:25:16.682 "abort": false, 00:25:16.682 "seek_hole": false, 00:25:16.682 "seek_data": false, 00:25:16.682 "copy": false, 00:25:16.682 "nvme_iov_md": false 00:25:16.682 }, 00:25:16.682 "driver_specific": { 00:25:16.682 "ftl": { 00:25:16.682 "base_bdev": "6307ee10-c6be-4aeb-83e2-54d1b4989f54", 
00:25:16.682 "cache": "nvc0n1p0" 00:25:16.682 } 00:25:16.682 } 00:25:16.682 } 00:25:16.682 ]' 00:25:16.682 13:59:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:16.682 13:59:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:16.682 13:59:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:16.940 [2024-11-04 13:59:03.756636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.756717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.940 [2024-11-04 13:59:03.756757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:16.940 [2024-11-04 13:59:03.756789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.756862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:16.940 [2024-11-04 13:59:03.761951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.762005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.940 [2024-11-04 13:59:03.762030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.050 ms 00:25:16.940 [2024-11-04 13:59:03.762044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.762707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.762880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.940 [2024-11-04 13:59:03.762911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:25:16.940 [2024-11-04 13:59:03.762924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.766818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.766850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.940 [2024-11-04 13:59:03.766867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.847 ms 00:25:16.940 [2024-11-04 13:59:03.766879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.773913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.773955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:16.940 [2024-11-04 13:59:03.773973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.969 ms 00:25:16.940 [2024-11-04 13:59:03.773987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.819872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.819944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.940 [2024-11-04 13:59:03.819991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.755 ms 00:25:16.940 [2024-11-04 13:59:03.820007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.847619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.847848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:16.940 [2024-11-04 13:59:03.847901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 27.430 ms 00:25:16.940 [2024-11-04 13:59:03.847927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.940 [2024-11-04 13:59:03.848287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.940 [2024-11-04 13:59:03.848323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:16.940 [2024-11-04 13:59:03.848354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:25:16.940 [2024-11-04 13:59:03.848374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.200 [2024-11-04 13:59:03.895258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.200 [2024-11-04 13:59:03.895514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:17.200 [2024-11-04 13:59:03.895562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.823 ms 00:25:17.200 [2024-11-04 13:59:03.895596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.200 [2024-11-04 13:59:03.941469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.200 [2024-11-04 13:59:03.941537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:17.200 [2024-11-04 13:59:03.941594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.666 ms 00:25:17.200 [2024-11-04 13:59:03.941616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.200 [2024-11-04 13:59:03.986641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.200 [2024-11-04 13:59:03.986707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:17.200 [2024-11-04 13:59:03.986742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.846 ms 00:25:17.200 [2024-11-04 13:59:03.986759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.200 [2024-11-04 13:59:04.030850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.200 [2024-11-04 13:59:04.030918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:17.200 [2024-11-04 13:59:04.030952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.853 ms 00:25:17.200 [2024-11-04 13:59:04.030969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.200 [2024-11-04 13:59:04.031135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:17.200 [2024-11-04 13:59:04.031169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031339] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:17.200 [2024-11-04 13:59:04.031674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.031978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 
[2024-11-04 13:59:04.031998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:25:17.201 [2024-11-04 13:59:04.032645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.032977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:17.201 [2024-11-04 13:59:04.033992] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:17.201 [2024-11-04 13:59:04.034022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:25:17.201 [2024-11-04 13:59:04.034043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:17.201 [2024-11-04 13:59:04.034066] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:17.201 [2024-11-04 13:59:04.034086] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:17.201 [2024-11-04 13:59:04.034110] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:17.201 [2024-11-04 13:59:04.034136] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:17.201 [2024-11-04 13:59:04.034159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:25:17.201 [2024-11-04 13:59:04.034177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:17.201 [2024-11-04 13:59:04.034199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:17.201 [2024-11-04 13:59:04.034218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:17.201 [2024-11-04 13:59:04.034243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.201 [2024-11-04 13:59:04.034265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:17.201 [2024-11-04 13:59:04.034292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:25:17.202 [2024-11-04 13:59:04.034311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.202 [2024-11-04 13:59:04.059838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.202 [2024-11-04 13:59:04.060066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:17.202 [2024-11-04 13:59:04.060120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.452 ms 00:25:17.202 [2024-11-04 13:59:04.060139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.202 [2024-11-04 13:59:04.060944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.202 [2024-11-04 13:59:04.060982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:17.202 [2024-11-04 13:59:04.061013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:25:17.202 [2024-11-04 13:59:04.061032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.460 [2024-11-04 13:59:04.144889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.460 [2024-11-04 13:59:04.145150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:17.460 [2024-11-04 13:59:04.145197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.460 [2024-11-04 13:59:04.145217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.460 [2024-11-04 13:59:04.145457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.460 [2024-11-04 13:59:04.145483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:17.460 [2024-11-04 13:59:04.145509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.460 [2024-11-04 13:59:04.145528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.460 [2024-11-04 13:59:04.145692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.460 [2024-11-04 13:59:04.145721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:17.460 [2024-11-04 13:59:04.145755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.460 [2024-11-04 13:59:04.145776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.460 [2024-11-04 13:59:04.145835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.460 [2024-11-04 13:59:04.145859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:17.460 [2024-11-04 13:59:04.145884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.460 [2024-11-04 13:59:04.145902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.460 [2024-11-04 13:59:04.309039] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.460 [2024-11-04 13:59:04.309118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:17.460 [2024-11-04 13:59:04.309153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.461 [2024-11-04 13:59:04.309170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.434602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.434858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:17.720 [2024-11-04 13:59:04.434906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.434925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.435135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.435162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.720 [2024-11-04 13:59:04.435214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.435240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.435327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.435349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.720 [2024-11-04 13:59:04.435373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.435392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.435634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.435662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.720 [2024-11-04 13:59:04.435688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.435708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.435818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.435843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:17.720 [2024-11-04 13:59:04.435868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.435886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.435975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.435998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.720 [2024-11-04 13:59:04.436027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.436046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.720 [2024-11-04 13:59:04.436148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.720 [2024-11-04 13:59:04.436170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.720 [2024-11-04 13:59:04.436194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.720 [2024-11-04 13:59:04.436215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:17.720 [2024-11-04 13:59:04.436507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 679.865 ms, result 0 00:25:17.720 true 00:25:17.720 13:59:04 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76448 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76448 ']' 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76448 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76448 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:17.720 killing process with pid 76448 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76448' 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76448 00:25:17.720 13:59:04 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76448 00:25:24.286 13:59:10 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:25:24.544 65536+0 records in 00:25:24.544 65536+0 records out 00:25:24.544 268435456 bytes (268 MB, 256 MiB) copied, 1.37209 s, 196 MB/s 00:25:24.544 13:59:11 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:24.802 [2024-11-04 13:59:11.532199] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:25:24.802 [2024-11-04 13:59:11.532577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:25:24.802 [2024-11-04 13:59:11.711321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.060 [2024-11-04 13:59:11.855269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.628 [2024-11-04 13:59:12.278984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.628 [2024-11-04 13:59:12.279064] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.628 [2024-11-04 13:59:12.449596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.449667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:25.628 [2024-11-04 13:59:12.449687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:25.628 [2024-11-04 13:59:12.449700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.453908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.453970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.628 [2024-11-04 13:59:12.453991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.176 ms 00:25:25.628 [2024-11-04 13:59:12.454007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.454284] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:25.628 [2024-11-04 13:59:12.455636] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:25.628 [2024-11-04 13:59:12.455679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.455693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.628 [2024-11-04 13:59:12.455708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:25:25.628 [2024-11-04 13:59:12.455730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.457418] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:25.628 [2024-11-04 13:59:12.480238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.480324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:25.628 [2024-11-04 13:59:12.480348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.816 ms 00:25:25.628 [2024-11-04 13:59:12.480365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.480590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.480627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:25.628 [2024-11-04 13:59:12.480646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:25.628 [2024-11-04 13:59:12.480661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.488944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:25.628 [2024-11-04 13:59:12.489013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.628 [2024-11-04 13:59:12.489034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.211 ms 00:25:25.628 [2024-11-04 13:59:12.489050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.489226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.489247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.628 [2024-11-04 13:59:12.489264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:25.628 [2024-11-04 13:59:12.489280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.489323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.489344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:25.628 [2024-11-04 13:59:12.489360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:25.628 [2024-11-04 13:59:12.489376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.489410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:25.628 [2024-11-04 13:59:12.495741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.495959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.628 [2024-11-04 13:59:12.495990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.337 ms 00:25:25.628 [2024-11-04 13:59:12.496006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.628 [2024-11-04 13:59:12.496123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.628 [2024-11-04 13:59:12.496146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:25.629 [2024-11-04 13:59:12.496166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:25.629 [2024-11-04 13:59:12.496182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.629 [2024-11-04 13:59:12.496216] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:25.629 [2024-11-04 13:59:12.496252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:25.629 [2024-11-04 13:59:12.496302] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:25.629 [2024-11-04 13:59:12.496328] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:25.629 [2024-11-04 13:59:12.496441] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:25.629 [2024-11-04 13:59:12.496466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:25.629 [2024-11-04 13:59:12.496494] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:25.629 [2024-11-04 13:59:12.496519] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:25.629 [2024-11-04 13:59:12.496543] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:25.629 [2024-11-04 13:59:12.496560] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:25.629 [2024-11-04 13:59:12.496604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:25.629 [2024-11-04 13:59:12.496625] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:25.629 [2024-11-04 13:59:12.496644] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:25.629 [2024-11-04 13:59:12.496665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.629 [2024-11-04 13:59:12.496684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:25.629 [2024-11-04 13:59:12.496707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:25:25.629 [2024-11-04 13:59:12.496729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.629 [2024-11-04 13:59:12.496876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.629 [2024-11-04 13:59:12.496903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:25.629 [2024-11-04 13:59:12.496935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:25.629 [2024-11-04 13:59:12.496953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.629 [2024-11-04 13:59:12.497072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:25.629 [2024-11-04 13:59:12.497091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:25.629 [2024-11-04 13:59:12.497107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:25.629 [2024-11-04 13:59:12.497161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:25.629 [2024-11-04 13:59:12.497206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.629 [2024-11-04 13:59:12.497236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:25.629 [2024-11-04 13:59:12.497251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:25.629 [2024-11-04 13:59:12.497266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.629 [2024-11-04 13:59:12.497294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:25.629 [2024-11-04 13:59:12.497310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:25.629 [2024-11-04 13:59:12.497325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:25.629 [2024-11-04 13:59:12.497355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497371] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:25.629 [2024-11-04 13:59:12.497401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:25.629 [2024-11-04 13:59:12.497445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:25.629 [2024-11-04 13:59:12.497489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:25.629 [2024-11-04 13:59:12.497537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:25.629 [2024-11-04 13:59:12.497597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.629 [2024-11-04 13:59:12.497627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:25.629 [2024-11-04 13:59:12.497642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:25.629 [2024-11-04 13:59:12.497657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.629 [2024-11-04 13:59:12.497672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:25.629 [2024-11-04 13:59:12.497687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:25.629 [2024-11-04 13:59:12.497701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:25.629 [2024-11-04 13:59:12.497730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:25.629 [2024-11-04 13:59:12.497745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497759] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:25.629 [2024-11-04 13:59:12.497775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:25.629 [2024-11-04 13:59:12.497792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.629 [2024-11-04 13:59:12.497840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:25.629 [2024-11-04 13:59:12.497855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:25.629 [2024-11-04 13:59:12.497866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:25.629 
[2024-11-04 13:59:12.497880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:25.629 [2024-11-04 13:59:12.497891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:25.629 [2024-11-04 13:59:12.497902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:25.629 [2024-11-04 13:59:12.497916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:25.629 [2024-11-04 13:59:12.497931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.497946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:25.629 [2024-11-04 13:59:12.497959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:25.629 [2024-11-04 13:59:12.497971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:25.629 [2024-11-04 13:59:12.497984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:25.629 [2024-11-04 13:59:12.497996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:25.629 [2024-11-04 13:59:12.498008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:25.629 [2024-11-04 13:59:12.498020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:25.629 [2024-11-04 13:59:12.498034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:25.629 [2024-11-04 13:59:12.498046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:25.629 [2024-11-04 13:59:12.498059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.498071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.498083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.498095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.498108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:25.629 [2024-11-04 13:59:12.498120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:25.629 [2024-11-04 13:59:12.498133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.498147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:25.629 [2024-11-04 13:59:12.498159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:25.629 [2024-11-04 13:59:12.498172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:25.629 [2024-11-04 13:59:12.498185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:25.629 [2024-11-04 13:59:12.498198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.630 [2024-11-04 13:59:12.498211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:25.630 [2024-11-04 13:59:12.498228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:25:25.630 [2024-11-04 13:59:12.498240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.630 [2024-11-04 13:59:12.545806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.630 [2024-11-04 13:59:12.545892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.630 [2024-11-04 13:59:12.545915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.490 ms 00:25:25.630 [2024-11-04 13:59:12.545929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.630 [2024-11-04 13:59:12.546160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.630 [2024-11-04 13:59:12.546178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:25.630 [2024-11-04 13:59:12.546192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:25.630 [2024-11-04 13:59:12.546204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.638099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.638178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.889 [2024-11-04 13:59:12.638210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.858 ms 00:25:25.889 [2024-11-04 13:59:12.638227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.638395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.638417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.889 [2024-11-04 13:59:12.638435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:25.889 [2024-11-04 13:59:12.638451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.639034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.639068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.889 [2024-11-04 13:59:12.639086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:25:25.889 [2024-11-04 13:59:12.639113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.639297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.639318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.889 [2024-11-04 13:59:12.639334] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:25.889 [2024-11-04 13:59:12.639351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.670155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.670227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.889 [2024-11-04 13:59:12.670251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.765 ms 00:25:25.889 [2024-11-04 13:59:12.670268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.701945] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:25.889 [2024-11-04 13:59:12.702029] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:25.889 [2024-11-04 13:59:12.702056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.702075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:25.889 [2024-11-04 13:59:12.702095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.540 ms 00:25:25.889 [2024-11-04 13:59:12.702111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.750779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.750875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:25.889 [2024-11-04 13:59:12.750937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.496 ms 00:25:25.889 [2024-11-04 13:59:12.750956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.889 [2024-11-04 13:59:12.781482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.889 [2024-11-04 13:59:12.781824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:25.889 [2024-11-04 13:59:12.781864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.327 ms 00:25:25.889 [2024-11-04 13:59:12.781884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.811776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.811876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:26.148 [2024-11-04 13:59:12.811900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.674 ms 00:25:26.148 [2024-11-04 13:59:12.811917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.813317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.813552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:26.148 [2024-11-04 13:59:12.813602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:25:26.148 [2024-11-04 13:59:12.813620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.946217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.946308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:26.148 [2024-11-04 13:59:12.946335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 132.538 ms 00:25:26.148 [2024-11-04 13:59:12.946353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.966910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:26.148 [2024-11-04 13:59:12.989468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.989560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:26.148 [2024-11-04 13:59:12.989608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.856 ms 00:25:26.148 [2024-11-04 13:59:12.989626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.989801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.989828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:26.148 [2024-11-04 13:59:12.989847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:26.148 [2024-11-04 13:59:12.989864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.989937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.989968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:26.148 [2024-11-04 13:59:12.989996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:26.148 [2024-11-04 13:59:12.990021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.990091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.990112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:26.148 [2024-11-04 13:59:12.990134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:26.148 [2024-11-04 13:59:12.990150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:12.990203] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:26.148 [2024-11-04 13:59:12.990236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:12.990253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:26.148 [2024-11-04 13:59:12.990269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:26.148 [2024-11-04 13:59:12.990284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:13.039909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:13.040213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:26.148 [2024-11-04 13:59:13.040257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.581 ms 00:25:26.148 [2024-11-04 13:59:13.040278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.148 [2024-11-04 13:59:13.040508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.148 [2024-11-04 13:59:13.040538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:26.148 [2024-11-04 13:59:13.040562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:26.148 [2024-11-04 13:59:13.040610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
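The trace records above show each step of the 'FTL startup' management pipeline completing with status 0: metadata and NV-cache initialization, restore of the valid map, band info, trim and P2L metadata, and finally the L2P load. For reference, a minimal sketch of the RPC pair that drives such a startup/shutdown cycle against a running spdk_tgt; the cache bdev name nvc0n1p0 matches this log, the base bdev name is a placeholder, and note that trim.sh itself replays a saved configuration via rpc.py load_config rather than issuing these calls directly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Creating the FTL bdev kicks off the 'FTL startup' management process whose
# per-step Action/name/duration/status records are logged above.
$RPC bdev_ftl_create -b ftl0 -d base0n1 -c nvc0n1p0   # base0n1 is a placeholder name
# Deleting it runs the matching 'FTL shutdown' process (persist + rollback steps).
$RPC bdev_ftl_delete -b ftl0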
00:25:26.148 [2024-11-04 13:59:13.041981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:26.148 [2024-11-04 13:59:13.048242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 591.929 ms, result 0 00:25:26.148 [2024-11-04 13:59:13.049035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.406 [2024-11-04 13:59:13.070345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:27.341  [2024-11-04T13:59:15.199Z] Copying: 29/256 [MB] (29 MBps) [2024-11-04T13:59:16.134Z] Copying: 60/256 [MB] (30 MBps) [2024-11-04T13:59:17.511Z] Copying: 93/256 [MB] (33 MBps) [2024-11-04T13:59:18.078Z] Copying: 125/256 [MB] (31 MBps) [2024-11-04T13:59:19.453Z] Copying: 154/256 [MB] (28 MBps) [2024-11-04T13:59:20.387Z] Copying: 187/256 [MB] (33 MBps) [2024-11-04T13:59:21.325Z] Copying: 222/256 [MB] (34 MBps) [2024-11-04T13:59:21.325Z] Copying: 252/256 [MB] (30 MBps) [2024-11-04T13:59:21.325Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-04 13:59:21.172044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.403 [2024-11-04 13:59:21.189901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.190182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.403 [2024-11-04 13:59:21.190228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.403 [2024-11-04 13:59:21.190253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.190320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:34.403 [2024-11-04 13:59:21.195248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.195308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.403 [2024-11-04 13:59:21.195324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.898 ms 00:25:34.403 [2024-11-04 13:59:21.195335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.197247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.197297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.403 [2024-11-04 13:59:21.197314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:25:34.403 [2024-11-04 13:59:21.197328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.204098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.204356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.403 [2024-11-04 13:59:21.204413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.743 ms 00:25:34.403 [2024-11-04 13:59:21.204436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.211453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.211704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.403 [2024-11-04 13:59:21.211746] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:25:34.403 [2024-11-04 13:59:21.211771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.257804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.257880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.403 [2024-11-04 13:59:21.257900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.917 ms 00:25:34.403 [2024-11-04 13:59:21.257912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.283560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.283643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.403 [2024-11-04 13:59:21.283670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.520 ms 00:25:34.403 [2024-11-04 13:59:21.283689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.403 [2024-11-04 13:59:21.283880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.403 [2024-11-04 13:59:21.283896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.403 [2024-11-04 13:59:21.283911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:34.403 [2024-11-04 13:59:21.283923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-11-04 13:59:21.330309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-11-04 13:59:21.330391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:34.663 [2024-11-04 13:59:21.330411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.356 ms 00:25:34.663 [2024-11-04 13:59:21.330424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-11-04 13:59:21.376724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-11-04 13:59:21.376797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:34.663 [2024-11-04 13:59:21.376815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.179 ms 00:25:34.663 [2024-11-04 13:59:21.376827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-11-04 13:59:21.422001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-11-04 13:59:21.422084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:34.663 [2024-11-04 13:59:21.422104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.065 ms 00:25:34.663 [2024-11-04 13:59:21.422116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-11-04 13:59:21.467885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.663 [2024-11-04 13:59:21.468144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:34.663 [2024-11-04 13:59:21.468189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.616 ms 00:25:34.663 [2024-11-04 13:59:21.468211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.663 [2024-11-04 13:59:21.468352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:34.663 [2024-11-04 13:59:21.468400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468753] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.468993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 
13:59:21.469277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:34.663 [2024-11-04 13:59:21.469810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.469834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.469859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:25:34.664 [2024-11-04 13:59:21.469882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.469905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.469942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.469972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.469996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:25:34.664 [2024-11-04 13:59:21.470547] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:34.664 [2024-11-04 13:59:21.470596] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:25:34.664 [2024-11-04 13:59:21.470622] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:34.664 [2024-11-04 13:59:21.470641] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:34.664 [2024-11-04 13:59:21.470656] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:34.664 [2024-11-04 13:59:21.470672] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:34.664 [2024-11-04 13:59:21.470687] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:34.664 [2024-11-04 13:59:21.470703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:34.664 [2024-11-04 13:59:21.470719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:34.664 [2024-11-04 13:59:21.470737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:34.664 [2024-11-04 13:59:21.470759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:34.664 [2024-11-04 13:59:21.470783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.664 [2024-11-04 13:59:21.470811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:34.664 [2024-11-04 13:59:21.470848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:25:34.664 [2024-11-04 13:59:21.470870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-11-04 13:59:21.495749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.664 [2024-11-04 13:59:21.495803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:34.664 [2024-11-04 13:59:21.495820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.827 ms 00:25:34.664 [2024-11-04 13:59:21.495833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-11-04 13:59:21.496484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.664 [2024-11-04 13:59:21.496555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:34.664 [2024-11-04 13:59:21.496617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:25:34.664 [2024-11-04 13:59:21.496631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-11-04 13:59:21.564423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.664 [2024-11-04 13:59:21.564494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.664 [2024-11-04 13:59:21.564514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.664 [2024-11-04 13:59:21.564528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-11-04 13:59:21.564665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.664 [2024-11-04 13:59:21.564688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.664 [2024-11-04 13:59:21.564702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.664 [2024-11-04 13:59:21.564713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
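Among the statistics dumped above by ftl_dev_dump_stats, WAF (write amplification factor) is reported as inf because the device has performed 960 internal writes against zero user writes at this point in the test. A short shell sketch of the arithmetic behind that field (illustrative only, not SPDK code):

total_writes=960  # NAND-side writes, from the stats dump above
user_writes=0     # host writes; none yet at this point in the run
if (( user_writes == 0 )); then
  echo 'WAF: inf'   # no user writes -> amplification is undefined, reported as inf
else
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t/u }'
fi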
00:25:34.664 [2024-11-04 13:59:21.564798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.664 [2024-11-04 13:59:21.564825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.664 [2024-11-04 13:59:21.564857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.664 [2024-11-04 13:59:21.564896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.664 [2024-11-04 13:59:21.564939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.664 [2024-11-04 13:59:21.564962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.664 [2024-11-04 13:59:21.564989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.664 [2024-11-04 13:59:21.565018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.923 [2024-11-04 13:59:21.711265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.711356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.924 [2024-11-04 13:59:21.711377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.711389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.835931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.924 [2024-11-04 13:59:21.836038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.836051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.924 [2024-11-04 13:59:21.836197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.836209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.924 [2024-11-04 13:59:21.836270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.836287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.924 [2024-11-04 13:59:21.836452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.836464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:34.924 [2024-11-04 13:59:21.836537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 
13:59:21.836549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.924 [2024-11-04 13:59:21.836656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.836669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.924 [2024-11-04 13:59:21.836736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.924 [2024-11-04 13:59:21.836749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.924 [2024-11-04 13:59:21.836765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.924 [2024-11-04 13:59:21.836934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 647.036 ms, result 0 00:25:36.826 00:25:36.826 00:25:36.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.826 13:59:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76793 00:25:36.826 13:59:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:36.826 13:59:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76793 00:25:36.826 13:59:23 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76793 ']' 00:25:36.826 13:59:23 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.826 13:59:23 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:36.826 13:59:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.826 13:59:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:36.826 13:59:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:36.826 [2024-11-04 13:59:23.442280] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
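At this point trim.sh has launched a fresh spdk_tgt (pid 76793) with -L ftl_init and is blocking in waitforlisten until the RPC socket answers. A rough sketch of that wait loop, assuming the usual pattern from test/common/autotest_common.sh (the real helper carries more retries and error handling):

# Poll the RPC socket until the freshly started target answers, or give up
# if the process dies first. rpc_get_methods is a cheap, always-available RPC.
wait_for_rpc() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock}
  while kill -0 "$pid" 2>/dev/null; do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
      return 0   # target is up and listening
    fi
    sleep 0.1
  done
  return 1       # target exited before listening on $sock
}
wait_for_rpc 76793 /var/tmp/spdk.sock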
00:25:36.826 [2024-11-04 13:59:23.442417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76793 ] 00:25:36.826 [2024-11-04 13:59:23.624181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.085 [2024-11-04 13:59:23.757899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.019 13:59:24 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:38.019 13:59:24 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:25:38.019 13:59:24 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:38.277 [2024-11-04 13:59:25.092980] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:38.278 [2024-11-04 13:59:25.093058] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:38.536 [2024-11-04 13:59:25.287001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.536 [2024-11-04 13:59:25.287068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:38.536 [2024-11-04 13:59:25.287094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:38.536 [2024-11-04 13:59:25.287108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.536 [2024-11-04 13:59:25.291635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.536 [2024-11-04 13:59:25.291855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.536 [2024-11-04 13:59:25.291887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.499 ms 00:25:38.536 [2024-11-04 13:59:25.291901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.536 [2024-11-04 13:59:25.292228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:38.536 [2024-11-04 13:59:25.293449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:38.536 [2024-11-04 13:59:25.293493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.536 [2024-11-04 13:59:25.293507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.536 [2024-11-04 13:59:25.293523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:25:38.536 [2024-11-04 13:59:25.293536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.536 [2024-11-04 13:59:25.295148] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:38.536 [2024-11-04 13:59:25.318200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.536 [2024-11-04 13:59:25.318267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:38.536 [2024-11-04 13:59:25.318288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.054 ms 00:25:38.536 [2024-11-04 13:59:25.318304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.536 [2024-11-04 13:59:25.318451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.536 [2024-11-04 13:59:25.318471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:38.536 [2024-11-04 13:59:25.318486] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:38.536 [2024-11-04 13:59:25.318501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.536 [2024-11-04 13:59:25.326156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.326222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.537 [2024-11-04 13:59:25.326238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.590 ms 00:25:38.537 [2024-11-04 13:59:25.326257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.326436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.326460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.537 [2024-11-04 13:59:25.326474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:25:38.537 [2024-11-04 13:59:25.326500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.326554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.326593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:38.537 [2024-11-04 13:59:25.326607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:38.537 [2024-11-04 13:59:25.326625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.326658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:38.537 [2024-11-04 13:59:25.332144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.332184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.537 [2024-11-04 13:59:25.332205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.487 ms 00:25:38.537 [2024-11-04 13:59:25.332219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.332319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.332334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:38.537 [2024-11-04 13:59:25.332362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:38.537 [2024-11-04 13:59:25.332375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.332411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:38.537 [2024-11-04 13:59:25.332441] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:38.537 [2024-11-04 13:59:25.332503] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:38.537 [2024-11-04 13:59:25.332528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:38.537 [2024-11-04 13:59:25.332680] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:38.537 [2024-11-04 13:59:25.332704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:38.537 [2024-11-04 13:59:25.332729] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:38.537 [2024-11-04 13:59:25.332745] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:38.537 [2024-11-04 13:59:25.332766] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:38.537 [2024-11-04 13:59:25.332780] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:38.537 [2024-11-04 13:59:25.332798] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:38.537 [2024-11-04 13:59:25.332811] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:38.537 [2024-11-04 13:59:25.332833] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:38.537 [2024-11-04 13:59:25.332858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.332877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:38.537 [2024-11-04 13:59:25.332890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:25:38.537 [2024-11-04 13:59:25.332915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.333009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.537 [2024-11-04 13:59:25.333039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:38.537 [2024-11-04 13:59:25.333052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:38.537 [2024-11-04 13:59:25.333067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.537 [2024-11-04 13:59:25.333185] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:38.537 [2024-11-04 13:59:25.333206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:38.537 [2024-11-04 13:59:25.333219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:38.537 [2024-11-04 13:59:25.333265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:38.537 [2024-11-04 13:59:25.333309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:38.537 [2024-11-04 13:59:25.333335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:38.537 [2024-11-04 13:59:25.333349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:38.537 [2024-11-04 13:59:25.333360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:38.537 [2024-11-04 13:59:25.333375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:38.537 [2024-11-04 13:59:25.333387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:38.537 [2024-11-04 13:59:25.333400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.537 
[2024-11-04 13:59:25.333412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:38.537 [2024-11-04 13:59:25.333426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:38.537 [2024-11-04 13:59:25.333474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:38.537 [2024-11-04 13:59:25.333521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:38.537 [2024-11-04 13:59:25.333558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:38.537 [2024-11-04 13:59:25.333610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:38.537 [2024-11-04 13:59:25.333649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:38.537 [2024-11-04 13:59:25.333676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:38.537 [2024-11-04 13:59:25.333690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:38.537 [2024-11-04 13:59:25.333702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:38.537 [2024-11-04 13:59:25.333716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:38.537 [2024-11-04 13:59:25.333727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:38.537 [2024-11-04 13:59:25.333744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:38.537 [2024-11-04 13:59:25.333770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:38.537 [2024-11-04 13:59:25.333781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:38.537 [2024-11-04 13:59:25.333819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:38.537 [2024-11-04 13:59:25.333851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:38.537 [2024-11-04 13:59:25.333863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.537 [2024-11-04 13:59:25.333878] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:38.537 [2024-11-04 13:59:25.333890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:38.538 [2024-11-04 13:59:25.333904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:38.538 [2024-11-04 13:59:25.333916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:38.538 [2024-11-04 13:59:25.333934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:38.538 [2024-11-04 13:59:25.333946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:38.538 [2024-11-04 13:59:25.333965] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:38.538 [2024-11-04 13:59:25.333981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:38.538 [2024-11-04 13:59:25.334020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:38.538 [2024-11-04 13:59:25.334041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:38.538 [2024-11-04 13:59:25.334054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:38.538 [2024-11-04 13:59:25.334073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:38.538 [2024-11-04 13:59:25.334086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:38.538 [2024-11-04 13:59:25.334104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:38.538 [2024-11-04 13:59:25.334117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:38.538 [2024-11-04 13:59:25.334135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:38.538 [2024-11-04 13:59:25.334149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:38.538 [2024-11-04 13:59:25.334230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:38.538 [2024-11-04 
13:59:25.334245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:38.538 [2024-11-04 13:59:25.334283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:38.538 [2024-11-04 13:59:25.334301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:38.538 [2024-11-04 13:59:25.334314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:38.538 [2024-11-04 13:59:25.334333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.334346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:38.538 [2024-11-04 13:59:25.334365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.210 ms 00:25:38.538 [2024-11-04 13:59:25.334384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.538 [2024-11-04 13:59:25.383494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.383787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.538 [2024-11-04 13:59:25.383921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.023 ms 00:25:38.538 [2024-11-04 13:59:25.383970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.538 [2024-11-04 13:59:25.384279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.384416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:38.538 [2024-11-04 13:59:25.384526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:38.538 [2024-11-04 13:59:25.384642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.538 [2024-11-04 13:59:25.442382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.442663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.538 [2024-11-04 13:59:25.442773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.648 ms 00:25:38.538 [2024-11-04 13:59:25.442821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.538 [2024-11-04 13:59:25.442988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.443103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.538 [2024-11-04 13:59:25.443160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:38.538 [2024-11-04 13:59:25.443200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.538 [2024-11-04 13:59:25.443730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.443870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.538 [2024-11-04 13:59:25.443967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:25:38.538 [2024-11-04 13:59:25.444011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:38.538 [2024-11-04 13:59:25.444249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.538 [2024-11-04 13:59:25.444303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.538 [2024-11-04 13:59:25.444350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:25:38.538 [2024-11-04 13:59:25.444449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.470165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.470383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.796 [2024-11-04 13:59:25.470492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.623 ms 00:25:38.796 [2024-11-04 13:59:25.470540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.493838] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:38.796 [2024-11-04 13:59:25.494067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:38.796 [2024-11-04 13:59:25.494264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.494309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:38.796 [2024-11-04 13:59:25.494352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.499 ms 00:25:38.796 [2024-11-04 13:59:25.494389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.531030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.531231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:38.796 [2024-11-04 13:59:25.531335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.410 ms 00:25:38.796 [2024-11-04 13:59:25.531380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.554359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.554563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:38.796 [2024-11-04 13:59:25.554801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.806 ms 00:25:38.796 [2024-11-04 13:59:25.554850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.577344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.577564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:38.796 [2024-11-04 13:59:25.577688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.351 ms 00:25:38.796 [2024-11-04 13:59:25.577735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.578753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.578900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:38.796 [2024-11-04 13:59:25.578996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:25:38.796 [2024-11-04 13:59:25.579039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 
13:59:25.699284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.796 [2024-11-04 13:59:25.699351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:38.796 [2024-11-04 13:59:25.699378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.190 ms 00:25:38.796 [2024-11-04 13:59:25.699392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.796 [2024-11-04 13:59:25.713631] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:39.055 [2024-11-04 13:59:25.732340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.055 [2024-11-04 13:59:25.732442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:39.055 [2024-11-04 13:59:25.732462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.792 ms 00:25:39.055 [2024-11-04 13:59:25.732481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.055 [2024-11-04 13:59:25.732658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.055 [2024-11-04 13:59:25.732684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:39.055 [2024-11-04 13:59:25.732699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:39.055 [2024-11-04 13:59:25.732717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.055 [2024-11-04 13:59:25.732780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.055 [2024-11-04 13:59:25.732801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:39.055 [2024-11-04 13:59:25.732814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:39.055 [2024-11-04 13:59:25.732848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.055 [2024-11-04 13:59:25.732879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.055 [2024-11-04 13:59:25.732898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:39.055 [2024-11-04 13:59:25.732911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:39.055 [2024-11-04 13:59:25.732929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.055 [2024-11-04 13:59:25.732980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:39.056 [2024-11-04 13:59:25.733003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.056 [2024-11-04 13:59:25.733019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:39.056 [2024-11-04 13:59:25.733035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:39.056 [2024-11-04 13:59:25.733047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.056 [2024-11-04 13:59:25.778927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.056 [2024-11-04 13:59:25.779009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:39.056 [2024-11-04 13:59:25.779033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.829 ms 00:25:39.056 [2024-11-04 13:59:25.779049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.056 [2024-11-04 13:59:25.779249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.056 [2024-11-04 13:59:25.779267] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:39.056 [2024-11-04 13:59:25.779289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:39.056 [2024-11-04 13:59:25.779301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.056 [2024-11-04 13:59:25.780652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:39.056 [2024-11-04 13:59:25.786540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 493.243 ms, result 0 00:25:39.056 [2024-11-04 13:59:25.787742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:39.056 Some configs were skipped because the RPC state that can call them passed over. 00:25:39.056 13:59:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:39.314 [2024-11-04 13:59:26.203826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.314 [2024-11-04 13:59:26.204112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:39.314 [2024-11-04 13:59:26.204239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:25:39.314 [2024-11-04 13:59:26.204300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.314 [2024-11-04 13:59:26.204463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.042 ms, result 0 00:25:39.314 true 00:25:39.314 13:59:26 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:39.572 [2024-11-04 13:59:26.449184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.572 [2024-11-04 13:59:26.449446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:39.572 [2024-11-04 13:59:26.449606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:25:39.572 [2024-11-04 13:59:26.449728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.572 [2024-11-04 13:59:26.449868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.784 ms, result 0 00:25:39.572 true 00:25:39.572 13:59:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76793 00:25:39.572 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76793 ']' 00:25:39.572 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76793 00:25:39.572 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:25:39.572 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:39.572 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76793 00:25:39.830 killing process with pid 76793 00:25:39.830 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:39.830 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:39.830 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76793' 00:25:39.830 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76793 00:25:39.830 13:59:26 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76793 00:25:41.205 [2024-11-04 13:59:27.837127] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.205 [2024-11-04 13:59:27.837209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.205 [2024-11-04 13:59:27.837230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:41.205 [2024-11-04 13:59:27.837246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.205 [2024-11-04 13:59:27.837280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:41.205 [2024-11-04 13:59:27.842140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.205 [2024-11-04 13:59:27.842177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.205 [2024-11-04 13:59:27.842197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:25:41.205 [2024-11-04 13:59:27.842209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.205 [2024-11-04 13:59:27.842509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.205 [2024-11-04 13:59:27.842524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:41.205 [2024-11-04 13:59:27.842539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:25:41.206 [2024-11-04 13:59:27.842550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.846357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.846399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.206 [2024-11-04 13:59:27.846421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms 00:25:41.206 [2024-11-04 13:59:27.846433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.853221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.853261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:41.206 [2024-11-04 13:59:27.853279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.738 ms 00:25:41.206 [2024-11-04 13:59:27.853291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.871200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.871248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.206 [2024-11-04 13:59:27.871271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.818 ms 00:25:41.206 [2024-11-04 13:59:27.871295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.882860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.882920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.206 [2024-11-04 13:59:27.882941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.496 ms 00:25:41.206 [2024-11-04 13:59:27.882953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.883107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.883122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.206 [2024-11-04 13:59:27.883138] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:41.206 [2024-11-04 13:59:27.883149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.900457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.900727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.206 [2024-11-04 13:59:27.900763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.275 ms 00:25:41.206 [2024-11-04 13:59:27.900776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.918060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.918114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.206 [2024-11-04 13:59:27.918138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.173 ms 00:25:41.206 [2024-11-04 13:59:27.918149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.935356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.935398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.206 [2024-11-04 13:59:27.935419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.142 ms 00:25:41.206 [2024-11-04 13:59:27.935431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.953035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.206 [2024-11-04 13:59:27.953080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.206 [2024-11-04 13:59:27.953100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.489 ms 00:25:41.206 [2024-11-04 13:59:27.953113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.206 [2024-11-04 13:59:27.953175] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.206 [2024-11-04 13:59:27.953197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 
13:59:27.953355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:41.206 [2024-11-04 13:59:27.953735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.953993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.954010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.954023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.954038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.954050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.206 [2024-11-04 13:59:27.954064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.207 [2024-11-04 13:59:27.954684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.207 [2024-11-04 13:59:27.954723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:25:41.207 [2024-11-04 13:59:27.954759] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:41.207 [2024-11-04 13:59:27.954779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:41.207 [2024-11-04 13:59:27.954791] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:41.207 [2024-11-04 13:59:27.954808] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:41.207 [2024-11-04 13:59:27.954820] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.207 [2024-11-04 13:59:27.954839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.207 [2024-11-04 13:59:27.954851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.207 [2024-11-04 13:59:27.954868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.207 [2024-11-04 13:59:27.954879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.207 [2024-11-04 13:59:27.954897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:41.207 [2024-11-04 13:59:27.954910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:41.207 [2024-11-04 13:59:27.954927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:25:41.207 [2024-11-04 13:59:27.954946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.207 [2024-11-04 13:59:27.978191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.207 [2024-11-04 13:59:27.978232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.207 [2024-11-04 13:59:27.978253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.208 ms 00:25:41.207 [2024-11-04 13:59:27.978264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.207 [2024-11-04 13:59:27.978899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.207 [2024-11-04 13:59:27.979074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.207 [2024-11-04 13:59:27.979109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:25:41.207 [2024-11-04 13:59:27.979121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.207 [2024-11-04 13:59:28.057442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.207 [2024-11-04 13:59:28.057510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.207 [2024-11-04 13:59:28.057532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.207 [2024-11-04 13:59:28.057545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.207 [2024-11-04 13:59:28.057732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.207 [2024-11-04 13:59:28.057748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.207 [2024-11-04 13:59:28.057769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.207 [2024-11-04 13:59:28.057781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.207 [2024-11-04 13:59:28.057849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.207 [2024-11-04 13:59:28.057865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.207 [2024-11-04 13:59:28.057885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.207 [2024-11-04 13:59:28.057896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.207 [2024-11-04 13:59:28.057924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.207 [2024-11-04 13:59:28.057936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.207 [2024-11-04 13:59:28.057952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.207 [2024-11-04 13:59:28.057967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.203631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.203689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.465 [2024-11-04 13:59:28.203709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.203721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 
13:59:28.310269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.310341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.465 [2024-11-04 13:59:28.310367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.310382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.310503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.310517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.465 [2024-11-04 13:59:28.310534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.310545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.310606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.310619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.465 [2024-11-04 13:59:28.310632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.310642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.310783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.310797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.465 [2024-11-04 13:59:28.310810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.310821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.310863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.310877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:41.465 [2024-11-04 13:59:28.310890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.310900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.310949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.310960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.465 [2024-11-04 13:59:28.310976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.465 [2024-11-04 13:59:28.310987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.465 [2024-11-04 13:59:28.311035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.465 [2024-11-04 13:59:28.311048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.465 [2024-11-04 13:59:28.311061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.466 [2024-11-04 13:59:28.311071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.466 [2024-11-04 13:59:28.311215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.066 ms, result 0 00:25:42.840 13:59:29 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:42.840 13:59:29 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.840 [2024-11-04 13:59:29.617316] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:25:42.840 [2024-11-04 13:59:29.617835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76873 ] 00:25:43.098 [2024-11-04 13:59:29.823909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.098 [2024-11-04 13:59:30.002008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.665 [2024-11-04 13:59:30.437931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.665 [2024-11-04 13:59:30.438033] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.924 [2024-11-04 13:59:30.612200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.924 [2024-11-04 13:59:30.612295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.924 [2024-11-04 13:59:30.612317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.924 [2024-11-04 13:59:30.612331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.924 [2024-11-04 13:59:30.615949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.924 [2024-11-04 13:59:30.616010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.924 [2024-11-04 13:59:30.616027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:25:43.924 [2024-11-04 13:59:30.616040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.924 [2024-11-04 13:59:30.616191] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.924 [2024-11-04 13:59:30.617332] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.924 [2024-11-04 13:59:30.617377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.924 [2024-11-04 13:59:30.617392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.924 [2024-11-04 13:59:30.617412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:25:43.924 [2024-11-04 13:59:30.617426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.924 [2024-11-04 13:59:30.619094] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.924 [2024-11-04 13:59:30.639973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.924 [2024-11-04 13:59:30.640084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.925 [2024-11-04 13:59:30.640105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.868 ms 00:25:43.925 [2024-11-04 13:59:30.640118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.640341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.640362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:43.925 [2024-11-04 13:59:30.640377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:25:43.925 [2024-11-04 13:59:30.640390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.648377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.648442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.925 [2024-11-04 13:59:30.648460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.923 ms 00:25:43.925 [2024-11-04 13:59:30.648473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.648688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.648710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.925 [2024-11-04 13:59:30.648726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:43.925 [2024-11-04 13:59:30.648739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.648798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.648818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.925 [2024-11-04 13:59:30.648832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:43.925 [2024-11-04 13:59:30.648855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.648912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:43.925 [2024-11-04 13:59:30.654165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.654222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.925 [2024-11-04 13:59:30.654238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:25:43.925 [2024-11-04 13:59:30.654251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.654368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.654384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.925 [2024-11-04 13:59:30.654398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:43.925 [2024-11-04 13:59:30.654410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.654440] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.925 [2024-11-04 13:59:30.654474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.925 [2024-11-04 13:59:30.654514] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.925 [2024-11-04 13:59:30.654536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.925 [2024-11-04 13:59:30.654651] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.925 [2024-11-04 13:59:30.654673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.925 [2024-11-04 13:59:30.654689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.925 [2024-11-04 13:59:30.654705] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.925 [2024-11-04 13:59:30.654724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.925 [2024-11-04 13:59:30.654738] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:43.925 [2024-11-04 13:59:30.654750] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.925 [2024-11-04 13:59:30.654762] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.925 [2024-11-04 13:59:30.654774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.925 [2024-11-04 13:59:30.654786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.654799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.925 [2024-11-04 13:59:30.654811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:25:43.925 [2024-11-04 13:59:30.654823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.654908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.925 [2024-11-04 13:59:30.654928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.925 [2024-11-04 13:59:30.654945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:43.925 [2024-11-04 13:59:30.654957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.925 [2024-11-04 13:59:30.655055] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.925 [2024-11-04 13:59:30.655071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.925 [2024-11-04 13:59:30.655084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.925 [2024-11-04 13:59:30.655121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.925 [2024-11-04 13:59:30.655155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.925 [2024-11-04 13:59:30.655178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.925 [2024-11-04 13:59:30.655189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:43.925 [2024-11-04 13:59:30.655201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.925 [2024-11-04 13:59:30.655228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.925 [2024-11-04 13:59:30.655240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:43.925 [2024-11-04 13:59:30.655252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655263] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.925 [2024-11-04 13:59:30.655275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.925 [2024-11-04 13:59:30.655309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.925 [2024-11-04 13:59:30.655343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.925 [2024-11-04 13:59:30.655378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.925 [2024-11-04 13:59:30.655412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.925 [2024-11-04 13:59:30.655446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.925 [2024-11-04 13:59:30.655471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.925 [2024-11-04 13:59:30.655483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:43.925 [2024-11-04 13:59:30.655495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.925 [2024-11-04 13:59:30.655506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.925 [2024-11-04 13:59:30.655517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:43.925 [2024-11-04 13:59:30.655528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.925 [2024-11-04 13:59:30.655551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:43.925 [2024-11-04 13:59:30.655562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655586] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.925 [2024-11-04 13:59:30.655599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.925 [2024-11-04 13:59:30.655612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.925 [2024-11-04 13:59:30.655652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.925 
[2024-11-04 13:59:30.655664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.925 [2024-11-04 13:59:30.655675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.925 [2024-11-04 13:59:30.655687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.925 [2024-11-04 13:59:30.655698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.925 [2024-11-04 13:59:30.655710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.925 [2024-11-04 13:59:30.655723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.925 [2024-11-04 13:59:30.655738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.925 [2024-11-04 13:59:30.655751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:43.925 [2024-11-04 13:59:30.655764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:43.926 [2024-11-04 13:59:30.655777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:43.926 [2024-11-04 13:59:30.655789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:43.926 [2024-11-04 13:59:30.655802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:43.926 [2024-11-04 13:59:30.655814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:43.926 [2024-11-04 13:59:30.655827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:43.926 [2024-11-04 13:59:30.655839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:43.926 [2024-11-04 13:59:30.655852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:43.926 [2024-11-04 13:59:30.655866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:43.926 [2024-11-04 13:59:30.655879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:43.926 [2024-11-04 13:59:30.655893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:43.926 [2024-11-04 13:59:30.655905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:43.926 [2024-11-04 13:59:30.655918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:43.926 [2024-11-04 13:59:30.655930] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.926 [2024-11-04 13:59:30.655944] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.926 [2024-11-04 13:59:30.655957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.926 [2024-11-04 13:59:30.655970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.926 [2024-11-04 13:59:30.655982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.926 [2024-11-04 13:59:30.655995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.926 [2024-11-04 13:59:30.656009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.656022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.926 [2024-11-04 13:59:30.656041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:25:43.926 [2024-11-04 13:59:30.656053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.699796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.700026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.926 [2024-11-04 13:59:30.700122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.671 ms 00:25:43.926 [2024-11-04 13:59:30.700166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.700460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.700633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.926 [2024-11-04 13:59:30.700726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:43.926 [2024-11-04 13:59:30.700769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.763168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.763474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.926 [2024-11-04 13:59:30.763586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.051 ms 00:25:43.926 [2024-11-04 13:59:30.763640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.763849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.764078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.926 [2024-11-04 13:59:30.764127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.926 [2024-11-04 13:59:30.764163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.764706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.764865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.926 [2024-11-04 13:59:30.764890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:25:43.926 [2024-11-04 13:59:30.764912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 
13:59:30.765066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.765090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.926 [2024-11-04 13:59:30.765105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:25:43.926 [2024-11-04 13:59:30.765119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.786342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.786412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.926 [2024-11-04 13:59:30.786431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.189 ms 00:25:43.926 [2024-11-04 13:59:30.786444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.808444] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:43.926 [2024-11-04 13:59:30.808552] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.926 [2024-11-04 13:59:30.808595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.808609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.926 [2024-11-04 13:59:30.808626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.931 ms 00:25:43.926 [2024-11-04 13:59:30.808639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.926 [2024-11-04 13:59:30.841400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.926 [2024-11-04 13:59:30.841813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.926 [2024-11-04 13:59:30.841847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.564 ms 00:25:43.926 [2024-11-04 13:59:30.841863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:30.863126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:30.863221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:44.186 [2024-11-04 13:59:30.863242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.076 ms 00:25:44.186 [2024-11-04 13:59:30.863258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:30.884903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:30.885274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:44.186 [2024-11-04 13:59:30.885307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.461 ms 00:25:44.186 [2024-11-04 13:59:30.885323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:30.886422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:30.886468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:44.186 [2024-11-04 13:59:30.886485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:25:44.186 [2024-11-04 13:59:30.886501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:30.982349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:30.982445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:44.186 [2024-11-04 13:59:30.982466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.791 ms 00:25:44.186 [2024-11-04 13:59:30.982480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:30.997675] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:44.186 [2024-11-04 13:59:31.016667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.016758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:44.186 [2024-11-04 13:59:31.016792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.937 ms 00:25:44.186 [2024-11-04 13:59:31.016819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.017072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.017104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:44.186 [2024-11-04 13:59:31.017132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:44.186 [2024-11-04 13:59:31.017154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.017269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.017296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:44.186 [2024-11-04 13:59:31.017323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:44.186 [2024-11-04 13:59:31.017347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.017421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.017454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:44.186 [2024-11-04 13:59:31.017481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:44.186 [2024-11-04 13:59:31.017506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.017607] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:44.186 [2024-11-04 13:59:31.017638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.017661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:44.186 [2024-11-04 13:59:31.017687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:44.186 [2024-11-04 13:59:31.017711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.060297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.060393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:44.186 [2024-11-04 13:59:31.060415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.521 ms 00:25:44.186 [2024-11-04 13:59:31.060428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.060707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.186 [2024-11-04 13:59:31.060727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:44.186 [2024-11-04 13:59:31.060743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:44.186 [2024-11-04 13:59:31.060757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.186 [2024-11-04 13:59:31.061987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.186 [2024-11-04 13:59:31.068082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 449.337 ms, result 0 00:25:44.186 [2024-11-04 13:59:31.069235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.186 [2024-11-04 13:59:31.090295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:45.565  [2024-11-04T13:59:33.423Z] Copying: 29/256 [MB] (29 MBps) [2024-11-04T13:59:34.359Z] Copying: 57/256 [MB] (27 MBps) [2024-11-04T13:59:35.301Z] Copying: 83/256 [MB] (26 MBps) [2024-11-04T13:59:36.238Z] Copying: 114/256 [MB] (30 MBps) [2024-11-04T13:59:37.172Z] Copying: 144/256 [MB] (30 MBps) [2024-11-04T13:59:38.107Z] Copying: 175/256 [MB] (30 MBps) [2024-11-04T13:59:39.544Z] Copying: 205/256 [MB] (30 MBps) [2024-11-04T13:59:40.132Z] Copying: 234/256 [MB] (29 MBps) [2024-11-04T13:59:40.132Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-04 13:59:39.842608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.210 [2024-11-04 13:59:39.858785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.858855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.210 [2024-11-04 13:59:39.858873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:53.210 [2024-11-04 13:59:39.858906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.858936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:53.210 [2024-11-04 13:59:39.863440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.863485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.210 [2024-11-04 13:59:39.863499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:25:53.210 [2024-11-04 13:59:39.863511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.863797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.863812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.210 [2024-11-04 13:59:39.863824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:25:53.210 [2024-11-04 13:59:39.863835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.866980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.867020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.210 [2024-11-04 13:59:39.867032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:25:53.210 [2024-11-04 13:59:39.867043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.872945] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.873146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.210 [2024-11-04 13:59:39.873172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.876 ms 00:25:53.210 [2024-11-04 13:59:39.873183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.914899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.914975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.210 [2024-11-04 13:59:39.914994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.602 ms 00:25:53.210 [2024-11-04 13:59:39.915006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.939234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.939336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.210 [2024-11-04 13:59:39.939354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.138 ms 00:25:53.210 [2024-11-04 13:59:39.939373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.939591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.939608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.210 [2024-11-04 13:59:39.939620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:53.210 [2024-11-04 13:59:39.939631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:39.984208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:39.984281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.210 [2024-11-04 13:59:39.984299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.535 ms 00:25:53.210 [2024-11-04 13:59:39.984310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:40.027112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:40.027190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.210 [2024-11-04 13:59:40.027210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.716 ms 00:25:53.210 [2024-11-04 13:59:40.027222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:40.069580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:40.069654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.210 [2024-11-04 13:59:40.069671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.259 ms 00:25:53.210 [2024-11-04 13:59:40.069682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:40.112555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-04 13:59:40.112798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.210 [2024-11-04 13:59:40.112853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.739 ms 00:25:53.210 [2024-11-04 13:59:40.112867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:53.210 [2024-11-04 13:59:40.112983] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.210 [2024-11-04 13:59:40.113004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.210 [2024-11-04 13:59:40.113167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:25:53.211 [2024-11-04 13:59:40.113317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.113991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114294] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.211 [2024-11-04 13:59:40.114324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.211 [2024-11-04 13:59:40.114334] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:25:53.212 [2024-11-04 13:59:40.114346] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:53.212 [2024-11-04 13:59:40.114356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:53.212 [2024-11-04 13:59:40.114366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:53.212 [2024-11-04 13:59:40.114377] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:53.212 [2024-11-04 13:59:40.114387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.212 [2024-11-04 13:59:40.114398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.212 [2024-11-04 13:59:40.114409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.212 [2024-11-04 13:59:40.114418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.212 [2024-11-04 13:59:40.114428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.212 [2024-11-04 13:59:40.114438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.212 [2024-11-04 13:59:40.114454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.212 [2024-11-04 13:59:40.114465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:25:53.212 [2024-11-04 13:59:40.114476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.136527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.471 [2024-11-04 13:59:40.136608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.471 [2024-11-04 13:59:40.136627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.021 ms 00:25:53.471 [2024-11-04 13:59:40.136640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.137381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.471 [2024-11-04 13:59:40.137408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.471 [2024-11-04 13:59:40.137422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:25:53.471 [2024-11-04 13:59:40.137434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.200709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.471 [2024-11-04 13:59:40.201001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.471 [2024-11-04 13:59:40.201029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.471 [2024-11-04 13:59:40.201043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.201182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.471 [2024-11-04 13:59:40.201195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.471 
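
The statistics dump above reads as follows: WAF (write amplification factor) is total writes divided by user writes, and with 960 total writes against 0 user writes (only FTL-internal metadata traffic so far) it is reported as "inf". The Rollback records that follow are the normal teardown path of the 'FTL shutdown' management process: each startup step is unwound in reverse order, and steps with nothing to undo report a duration of 0.000 ms. A minimal shell sketch for recomputing WAF from a saved console log, assuming one record per line and a hypothetical file name ftl.log:

    $ awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF}
           END {if (u > 0) printf "WAF: %.2f\n", t/u; else print "WAF: inf"}' ftl.log
    WAF: inf
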
[2024-11-04 13:59:40.201217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.471 [2024-11-04 13:59:40.201228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.201295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.471 [2024-11-04 13:59:40.201311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.471 [2024-11-04 13:59:40.201322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.471 [2024-11-04 13:59:40.201334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.201355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.471 [2024-11-04 13:59:40.201376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.471 [2024-11-04 13:59:40.201388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.471 [2024-11-04 13:59:40.201399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.471 [2024-11-04 13:59:40.333149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.471 [2024-11-04 13:59:40.333400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.471 [2024-11-04 13:59:40.333428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.471 [2024-11-04 13:59:40.333440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.446707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.446998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.730 [2024-11-04 13:59:40.447152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.447250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.447402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.447513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.730 [2024-11-04 13:59:40.447560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.447663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.447740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.447778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.730 [2024-11-04 13:59:40.447891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.447934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.448103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.448155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.730 [2024-11-04 13:59:40.448246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.448365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.448460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.448596] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.730 [2024-11-04 13:59:40.448643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.448735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.448817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.448876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.730 [2024-11-04 13:59:40.448955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.449036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.449128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.730 [2024-11-04 13:59:40.449295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.730 [2024-11-04 13:59:40.449354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.730 [2024-11-04 13:59:40.449390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.730 [2024-11-04 13:59:40.449607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 590.806 ms, result 0 00:25:54.663 00:25:54.663 00:25:54.663 13:59:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:54.922 13:59:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:55.489 13:59:42 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:55.489 [2024-11-04 13:59:42.241138] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
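
The three commands above are the next steps of the trim test: cmp checks that the first 4 MiB (4194304 bytes) of the dumped data file compare equal to /dev/zero, i.e. the trimmed range reads back as zeroes; md5sum records a checksum of the same file; spdk_dd then copies the prepared random pattern (1024 I/O units, per --count=1024) into the ftl0 bdev described by ftl.json, which triggers the second FTL bring-up logged below. A sketch of the same sequence run by hand, using the flags exactly as they appear above and assuming the working directory is the SPDK checkout:

    $ cd /home/vagrant/spdk_repo/spdk
    $ cmp --bytes=4194304 test/ftl/data /dev/zero && echo "trimmed range is zeroed"
    $ md5sum test/ftl/data
    $ ./build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
          --count=1024 --json=test/ftl/config/ftl.json
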
00:25:55.489 [2024-11-04 13:59:42.241544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77001 ] 00:25:55.748 [2024-11-04 13:59:42.415146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.748 [2024-11-04 13:59:42.542653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.315 [2024-11-04 13:59:42.932664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.315 [2024-11-04 13:59:42.933057] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.315 [2024-11-04 13:59:43.100198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.315 [2024-11-04 13:59:43.100288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.315 [2024-11-04 13:59:43.100308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.315 [2024-11-04 13:59:43.100321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.315 [2024-11-04 13:59:43.104084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.104166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.316 [2024-11-04 13:59:43.104182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:25:56.316 [2024-11-04 13:59:43.104194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.104376] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.316 [2024-11-04 13:59:43.105550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.316 [2024-11-04 13:59:43.105608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.105622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.316 [2024-11-04 13:59:43.105636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:25:56.316 [2024-11-04 13:59:43.105648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.107258] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:56.316 [2024-11-04 13:59:43.128690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.128777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:56.316 [2024-11-04 13:59:43.128797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.429 ms 00:25:56.316 [2024-11-04 13:59:43.128809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.129031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.129050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:56.316 [2024-11-04 13:59:43.129064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:56.316 [2024-11-04 13:59:43.129076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.137057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:56.316 [2024-11-04 13:59:43.137110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.316 [2024-11-04 13:59:43.137127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.926 ms 00:25:56.316 [2024-11-04 13:59:43.137140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.137288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.137308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.316 [2024-11-04 13:59:43.137322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:56.316 [2024-11-04 13:59:43.137334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.137371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.137389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.316 [2024-11-04 13:59:43.137401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:56.316 [2024-11-04 13:59:43.137413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.137453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:56.316 [2024-11-04 13:59:43.143062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.143119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.316 [2024-11-04 13:59:43.143135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.616 ms 00:25:56.316 [2024-11-04 13:59:43.143146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.143257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.143272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.316 [2024-11-04 13:59:43.143284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:56.316 [2024-11-04 13:59:43.143295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.143321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:56.316 [2024-11-04 13:59:43.143350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:56.316 [2024-11-04 13:59:43.143387] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:56.316 [2024-11-04 13:59:43.143406] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:56.316 [2024-11-04 13:59:43.143498] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.316 [2024-11-04 13:59:43.143512] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.316 [2024-11-04 13:59:43.143526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:56.316 [2024-11-04 13:59:43.143539] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.316 [2024-11-04 13:59:43.143556] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.316 [2024-11-04 13:59:43.143587] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:56.316 [2024-11-04 13:59:43.143599] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.316 [2024-11-04 13:59:43.143609] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.316 [2024-11-04 13:59:43.143620] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.316 [2024-11-04 13:59:43.143630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.143641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.316 [2024-11-04 13:59:43.143652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:25:56.316 [2024-11-04 13:59:43.143662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.143741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.316 [2024-11-04 13:59:43.143755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.316 [2024-11-04 13:59:43.143769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:56.316 [2024-11-04 13:59:43.143779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.316 [2024-11-04 13:59:43.143871] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.316 [2024-11-04 13:59:43.143884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.316 [2024-11-04 13:59:43.143895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.316 [2024-11-04 13:59:43.143906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.316 [2024-11-04 13:59:43.143917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.316 [2024-11-04 13:59:43.143926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.316 [2024-11-04 13:59:43.143936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:56.316 [2024-11-04 13:59:43.143946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.316 [2024-11-04 13:59:43.143956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:56.316 [2024-11-04 13:59:43.143965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.316 [2024-11-04 13:59:43.143975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.316 [2024-11-04 13:59:43.143984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:56.316 [2024-11-04 13:59:43.143994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.316 [2024-11-04 13:59:43.144017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.316 [2024-11-04 13:59:43.144027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:56.316 [2024-11-04 13:59:43.144053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.316 [2024-11-04 13:59:43.144074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:56.316 [2024-11-04 13:59:43.144084] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.316 [2024-11-04 13:59:43.144105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.316 [2024-11-04 13:59:43.144125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.316 [2024-11-04 13:59:43.144135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.316 [2024-11-04 13:59:43.144156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.316 [2024-11-04 13:59:43.144166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.316 [2024-11-04 13:59:43.144186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.316 [2024-11-04 13:59:43.144197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.316 [2024-11-04 13:59:43.144216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.316 [2024-11-04 13:59:43.144226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.316 [2024-11-04 13:59:43.144264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.316 [2024-11-04 13:59:43.144275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:56.316 [2024-11-04 13:59:43.144285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.316 [2024-11-04 13:59:43.144296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.316 [2024-11-04 13:59:43.144307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:56.316 [2024-11-04 13:59:43.144317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.316 [2024-11-04 13:59:43.144328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.316 [2024-11-04 13:59:43.144339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:56.316 [2024-11-04 13:59:43.144349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.317 [2024-11-04 13:59:43.144360] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.317 [2024-11-04 13:59:43.144372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.317 [2024-11-04 13:59:43.144383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.317 [2024-11-04 13:59:43.144399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.317 [2024-11-04 13:59:43.144410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:56.317 [2024-11-04 13:59:43.144421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.317 [2024-11-04 13:59:43.144432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.317 
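
The layout dump describes each metadata region twice: the dump_region lines give offsets and sizes in MiB (the base-device part continues just below with the data_btm region), while the superblock table that follows (dumped identically during the first bring-up) repeats everything as raw blk_offs/blk_sz values in FTL blocks. Assuming the default 4 KiB FTL block size, the two views agree; for example the 90.00 MiB l2p region above lines up with the type:0x2 entry carrying blk_sz:0x5a00, since 0x5a00 blocks of 4 KiB is exactly 90 MiB:

    $ echo $((0x5a00 * 4096 / 1048576))    # blocks * block size, in MiB
    90

To pull the whole region table out of a saved log (hypothetical file name ftl.log):

    $ grep -o 'Region type:0x[0-9a-f]* ver:[0-9]* blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' ftl.log
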
[2024-11-04 13:59:43.144443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.317 [2024-11-04 13:59:43.144454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.317 [2024-11-04 13:59:43.144465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.317 [2024-11-04 13:59:43.144477] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.317 [2024-11-04 13:59:43.144491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:56.317 [2024-11-04 13:59:43.144517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:56.317 [2024-11-04 13:59:43.144529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:56.317 [2024-11-04 13:59:43.144540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:56.317 [2024-11-04 13:59:43.144553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:56.317 [2024-11-04 13:59:43.144565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:56.317 [2024-11-04 13:59:43.144578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:56.317 [2024-11-04 13:59:43.144589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:56.317 [2024-11-04 13:59:43.144614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:56.317 [2024-11-04 13:59:43.144626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:56.317 [2024-11-04 13:59:43.144686] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.317 [2024-11-04 13:59:43.144700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.317 [2024-11-04 13:59:43.144727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.317 [2024-11-04 13:59:43.144739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.317 [2024-11-04 13:59:43.144751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.317 [2024-11-04 13:59:43.144765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.317 [2024-11-04 13:59:43.144777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.317 [2024-11-04 13:59:43.144794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:25:56.317 [2024-11-04 13:59:43.144805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.317 [2024-11-04 13:59:43.185881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.317 [2024-11-04 13:59:43.185960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.317 [2024-11-04 13:59:43.185979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.992 ms 00:25:56.317 [2024-11-04 13:59:43.185991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.317 [2024-11-04 13:59:43.186180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.317 [2024-11-04 13:59:43.186200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.317 [2024-11-04 13:59:43.186213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:56.317 [2024-11-04 13:59:43.186224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.575 [2024-11-04 13:59:43.245756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.575 [2024-11-04 13:59:43.245823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.575 [2024-11-04 13:59:43.245842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.503 ms 00:25:56.575 [2024-11-04 13:59:43.245876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.575 [2024-11-04 13:59:43.246049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.575 [2024-11-04 13:59:43.246064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.575 [2024-11-04 13:59:43.246077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.575 [2024-11-04 13:59:43.246088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.575 [2024-11-04 13:59:43.246550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.575 [2024-11-04 13:59:43.246565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.575 [2024-11-04 13:59:43.246577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:25:56.575 [2024-11-04 13:59:43.246614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.575 [2024-11-04 13:59:43.246749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.575 [2024-11-04 13:59:43.246765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.575 [2024-11-04 13:59:43.246776] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:56.575 [2024-11-04 13:59:43.246787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.575 [2024-11-04 13:59:43.267090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.575 [2024-11-04 13:59:43.267154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.575 [2024-11-04 13:59:43.267172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.274 ms 00:25:56.575 [2024-11-04 13:59:43.267183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.575 [2024-11-04 13:59:43.288162] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:56.576 [2024-11-04 13:59:43.288253] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:56.576 [2024-11-04 13:59:43.288273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.288286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:56.576 [2024-11-04 13:59:43.288302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.917 ms 00:25:56.576 [2024-11-04 13:59:43.288312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.322482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.322614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:56.576 [2024-11-04 13:59:43.322633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.007 ms 00:25:56.576 [2024-11-04 13:59:43.322645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.343742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.343827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:56.576 [2024-11-04 13:59:43.343850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.950 ms 00:25:56.576 [2024-11-04 13:59:43.343865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.362888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.362960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:56.576 [2024-11-04 13:59:43.362982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.859 ms 00:25:56.576 [2024-11-04 13:59:43.362998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.364033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.364088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.576 [2024-11-04 13:59:43.364107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:25:56.576 [2024-11-04 13:59:43.364123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.457055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.457338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:56.576 [2024-11-04 13:59:43.457385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 92.879 ms 00:25:56.576 [2024-11-04 13:59:43.457399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.470483] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:56.576 [2024-11-04 13:59:43.488084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.488150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.576 [2024-11-04 13:59:43.488168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.429 ms 00:25:56.576 [2024-11-04 13:59:43.488180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.488348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.488369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:56.576 [2024-11-04 13:59:43.488385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:56.576 [2024-11-04 13:59:43.488400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.488466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.488481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.576 [2024-11-04 13:59:43.488497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:56.576 [2024-11-04 13:59:43.488511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.488548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.488568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.576 [2024-11-04 13:59:43.488603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.576 [2024-11-04 13:59:43.488619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.576 [2024-11-04 13:59:43.488664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:56.576 [2024-11-04 13:59:43.488681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.576 [2024-11-04 13:59:43.488695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:56.576 [2024-11-04 13:59:43.488709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:56.576 [2024-11-04 13:59:43.488724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-04 13:59:43.527167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-04 13:59:43.527225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.834 [2024-11-04 13:59:43.527242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.414 ms 00:25:56.834 [2024-11-04 13:59:43.527253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-04 13:59:43.527395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-04 13:59:43.527410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.834 [2024-11-04 13:59:43.527422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:56.835 [2024-11-04 13:59:43.527433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:56.835 [2024-11-04 13:59:43.528481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.835 [2024-11-04 13:59:43.533536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 427.951 ms, result 0 00:25:56.835 [2024-11-04 13:59:43.534399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.835 [2024-11-04 13:59:43.553238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.835  [2024-11-04T13:59:43.757Z] Copying: 4096/4096 [kB] (average 32 MBps)[2024-11-04 13:59:43.683340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.835 [2024-11-04 13:59:43.700500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-04 13:59:43.700627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:56.835 [2024-11-04 13:59:43.700648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:56.835 [2024-11-04 13:59:43.700671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-04 13:59:43.700702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:56.835 [2024-11-04 13:59:43.705370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-04 13:59:43.705417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.835 [2024-11-04 13:59:43.705434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.648 ms 00:25:56.835 [2024-11-04 13:59:43.705447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-04 13:59:43.707501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-04 13:59:43.707548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.835 [2024-11-04 13:59:43.707576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:25:56.835 [2024-11-04 13:59:43.707590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-04 13:59:43.711268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-04 13:59:43.711311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.835 [2024-11-04 13:59:43.711324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:25:56.835 [2024-11-04 13:59:43.711335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-04 13:59:43.717664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-04 13:59:43.717702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.835 [2024-11-04 13:59:43.717717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:25:56.835 [2024-11-04 13:59:43.717729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.758220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.758281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:57.094 [2024-11-04 13:59:43.758298] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 40.429 ms 00:25:57.094 [2024-11-04 13:59:43.758326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.781179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.781420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.094 [2024-11-04 13:59:43.781455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.742 ms 00:25:57.094 [2024-11-04 13:59:43.781468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.781653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.781669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.094 [2024-11-04 13:59:43.781683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:25:57.094 [2024-11-04 13:59:43.781695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.824975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.825036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.094 [2024-11-04 13:59:43.825055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.242 ms 00:25:57.094 [2024-11-04 13:59:43.825067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.864931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.864987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.094 [2024-11-04 13:59:43.865006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.768 ms 00:25:57.094 [2024-11-04 13:59:43.865017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.904510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.904594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.094 [2024-11-04 13:59:43.904612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.379 ms 00:25:57.094 [2024-11-04 13:59:43.904623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.943521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-04 13:59:43.943602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.094 [2024-11-04 13:59:43.943619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.784 ms 00:25:57.094 [2024-11-04 13:59:43.943631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-04 13:59:43.943732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.094 [2024-11-04 13:59:43.943753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:57.094 [2024-11-04 13:59:43.943766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.094 [2024-11-04 13:59:43.943777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:57.095 [2024-11-04 13:59:43.943800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.943994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.095 [2024-11-04 13:59:43.944591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944754] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.944994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.945007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.945019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.945033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.945045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.096 [2024-11-04 13:59:43.945067] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.096 [2024-11-04 13:59:43.945080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:25:57.096 [2024-11-04 13:59:43.945093] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:57.096 [2024-11-04 13:59:43.945105] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:57.096 [2024-11-04 13:59:43.945116] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:57.096 [2024-11-04 13:59:43.945129] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:57.096 [2024-11-04 13:59:43.945140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.096 [2024-11-04 13:59:43.945153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.096 [2024-11-04 13:59:43.945164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.096 [2024-11-04 13:59:43.945175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.096 [2024-11-04 13:59:43.945186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.096 [2024-11-04 13:59:43.945198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.096 [2024-11-04 13:59:43.945216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.096 [2024-11-04 13:59:43.945230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:25:57.096 [2024-11-04 13:59:43.945242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.096 [2024-11-04 13:59:43.967224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.096 [2024-11-04 13:59:43.967285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.096 [2024-11-04 13:59:43.967303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.946 ms 00:25:57.096 [2024-11-04 13:59:43.967315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.096 [2024-11-04 13:59:43.967980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.096 [2024-11-04 13:59:43.968001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.096 [2024-11-04 13:59:43.968014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:25:57.096 [2024-11-04 13:59:43.968026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.026435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.026503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.355 [2024-11-04 13:59:44.026521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.026532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.026671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.026684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.355 [2024-11-04 13:59:44.026696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.026706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.026765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.026779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.355 [2024-11-04 13:59:44.026790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.026800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.026819] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.026835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.355 [2024-11-04 13:59:44.026846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.026856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.161471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.161782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.355 [2024-11-04 13:59:44.161810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.161822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.271492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.271617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.355 [2024-11-04 13:59:44.271636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.271649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.271767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.271783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.355 [2024-11-04 13:59:44.271796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.271807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.271841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.271854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.355 [2024-11-04 13:59:44.271878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.271890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.272043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.272059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.355 [2024-11-04 13:59:44.272071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.272083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.272126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.272141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.355 [2024-11-04 13:59:44.272153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.272170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.272216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.272235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.355 [2024-11-04 13:59:44.272257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.272277] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.272337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.355 [2024-11-04 13:59:44.272353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.355 [2024-11-04 13:59:44.272373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.355 [2024-11-04 13:59:44.272389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.355 [2024-11-04 13:59:44.272553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 572.057 ms, result 0 00:25:58.731 00:25:58.731 00:25:58.731 13:59:45 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77040 00:25:58.731 13:59:45 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:58.731 13:59:45 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77040 00:25:58.731 13:59:45 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 77040 ']' 00:25:58.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.731 13:59:45 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.731 13:59:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:58.731 13:59:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.731 13:59:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:58.731 13:59:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:58.731 [2024-11-04 13:59:45.541406] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:25:58.731 [2024-11-04 13:59:45.541885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77040 ] 00:25:58.989 [2024-11-04 13:59:45.728306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.989 [2024-11-04 13:59:45.849647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.945 13:59:46 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:59.945 13:59:46 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:25:59.945 13:59:46 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:26:00.203 [2024-11-04 13:59:47.044877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.203 [2024-11-04 13:59:47.044968] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.463 [2024-11-04 13:59:47.232296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.232361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.463 [2024-11-04 13:59:47.232387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:00.463 [2024-11-04 13:59:47.232401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.236857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.236930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.463 [2024-11-04 13:59:47.236951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.424 ms 00:26:00.463 [2024-11-04 13:59:47.236964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.237251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.463 [2024-11-04 13:59:47.238331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.463 [2024-11-04 13:59:47.238368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.238381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.463 [2024-11-04 13:59:47.238395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:26:00.463 [2024-11-04 13:59:47.238405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.240076] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:00.463 [2024-11-04 13:59:47.261993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.262090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:00.463 [2024-11-04 13:59:47.262110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.919 ms 00:26:00.463 [2024-11-04 13:59:47.262126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.262319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.262340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:00.463 [2024-11-04 13:59:47.262353] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:00.463 [2024-11-04 13:59:47.262369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.270253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.270322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.463 [2024-11-04 13:59:47.270337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.815 ms 00:26:00.463 [2024-11-04 13:59:47.270354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.270529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.270552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.463 [2024-11-04 13:59:47.270563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:26:00.463 [2024-11-04 13:59:47.270607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.270654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.270671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.463 [2024-11-04 13:59:47.270682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:00.463 [2024-11-04 13:59:47.270697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.270727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:00.463 [2024-11-04 13:59:47.276086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.276328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.463 [2024-11-04 13:59:47.276365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:26:00.463 [2024-11-04 13:59:47.276377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.276504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.276518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.463 [2024-11-04 13:59:47.276534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:00.463 [2024-11-04 13:59:47.276551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.276601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:00.463 [2024-11-04 13:59:47.276631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:00.463 [2024-11-04 13:59:47.276685] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:00.463 [2024-11-04 13:59:47.276707] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:00.463 [2024-11-04 13:59:47.276811] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.463 [2024-11-04 13:59:47.276825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.463 [2024-11-04 13:59:47.276857] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:00.463 [2024-11-04 13:59:47.276894] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.463 [2024-11-04 13:59:47.276913] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.463 [2024-11-04 13:59:47.276926] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:00.463 [2024-11-04 13:59:47.276943] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.463 [2024-11-04 13:59:47.276954] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.463 [2024-11-04 13:59:47.276975] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.463 [2024-11-04 13:59:47.276988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.277005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.463 [2024-11-04 13:59:47.277017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:26:00.463 [2024-11-04 13:59:47.277033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.277123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.463 [2024-11-04 13:59:47.277141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.463 [2024-11-04 13:59:47.277153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:00.463 [2024-11-04 13:59:47.277169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.463 [2024-11-04 13:59:47.277282] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.463 [2024-11-04 13:59:47.277304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.463 [2024-11-04 13:59:47.277317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.463 [2024-11-04 13:59:47.277334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.463 [2024-11-04 13:59:47.277346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.463 [2024-11-04 13:59:47.277362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.463 [2024-11-04 13:59:47.277372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:00.463 [2024-11-04 13:59:47.277394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.463 [2024-11-04 13:59:47.277407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:00.463 [2024-11-04 13:59:47.277423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.463 [2024-11-04 13:59:47.277434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.463 [2024-11-04 13:59:47.277449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:00.463 [2024-11-04 13:59:47.277459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.463 [2024-11-04 13:59:47.277475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.463 [2024-11-04 13:59:47.277486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:00.463 [2024-11-04 13:59:47.277501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.463 
[2024-11-04 13:59:47.277512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:00.463 [2024-11-04 13:59:47.277528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:00.463 [2024-11-04 13:59:47.277540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.463 [2024-11-04 13:59:47.277556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.463 [2024-11-04 13:59:47.277591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:00.463 [2024-11-04 13:59:47.277607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.463 [2024-11-04 13:59:47.277618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.463 [2024-11-04 13:59:47.277639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:00.463 [2024-11-04 13:59:47.277649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.463 [2024-11-04 13:59:47.277665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.463 [2024-11-04 13:59:47.277676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:00.464 [2024-11-04 13:59:47.277692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.464 [2024-11-04 13:59:47.277703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.464 [2024-11-04 13:59:47.277719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:00.464 [2024-11-04 13:59:47.277729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.464 [2024-11-04 13:59:47.277747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.464 [2024-11-04 13:59:47.277758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:00.464 [2024-11-04 13:59:47.277774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.464 [2024-11-04 13:59:47.277784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.464 [2024-11-04 13:59:47.277800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:00.464 [2024-11-04 13:59:47.277810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.464 [2024-11-04 13:59:47.277825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.464 [2024-11-04 13:59:47.277836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:00.464 [2024-11-04 13:59:47.277856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.464 [2024-11-04 13:59:47.277867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.464 [2024-11-04 13:59:47.277883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:00.464 [2024-11-04 13:59:47.277893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.464 [2024-11-04 13:59:47.277909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.464 [2024-11-04 13:59:47.277920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.464 [2024-11-04 13:59:47.277943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.464 [2024-11-04 13:59:47.277954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.464 [2024-11-04 13:59:47.277981] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:26:00.464 [2024-11-04 13:59:47.277991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.464 [2024-11-04 13:59:47.278005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.464 [2024-11-04 13:59:47.278016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.464 [2024-11-04 13:59:47.278030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.464 [2024-11-04 13:59:47.278041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.464 [2024-11-04 13:59:47.278057] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.464 [2024-11-04 13:59:47.278070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:00.464 [2024-11-04 13:59:47.278104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:00.464 [2024-11-04 13:59:47.278120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:00.464 [2024-11-04 13:59:47.278131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:00.464 [2024-11-04 13:59:47.278146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:00.464 [2024-11-04 13:59:47.278157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:00.464 [2024-11-04 13:59:47.278172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:00.464 [2024-11-04 13:59:47.278184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:00.464 [2024-11-04 13:59:47.278199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:00.464 [2024-11-04 13:59:47.278209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:00.464 [2024-11-04 13:59:47.278277] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.464 [2024-11-04 
13:59:47.278289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.464 [2024-11-04 13:59:47.278322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.464 [2024-11-04 13:59:47.278337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.464 [2024-11-04 13:59:47.278349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.464 [2024-11-04 13:59:47.278365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.278376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.464 [2024-11-04 13:59:47.278391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:26:00.464 [2024-11-04 13:59:47.278402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.464 [2024-11-04 13:59:47.322637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.322702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:00.464 [2024-11-04 13:59:47.322725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.154 ms 00:26:00.464 [2024-11-04 13:59:47.322739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.464 [2024-11-04 13:59:47.323013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.323035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:00.464 [2024-11-04 13:59:47.323053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:00.464 [2024-11-04 13:59:47.323065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.464 [2024-11-04 13:59:47.377539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.377621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:00.464 [2024-11-04 13:59:47.377646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.433 ms 00:26:00.464 [2024-11-04 13:59:47.377658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.464 [2024-11-04 13:59:47.377793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.377808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.464 [2024-11-04 13:59:47.377826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:00.464 [2024-11-04 13:59:47.377837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.464 [2024-11-04 13:59:47.378310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.378325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.464 [2024-11-04 13:59:47.378347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:26:00.464 [2024-11-04 13:59:47.378358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:00.464 [2024-11-04 13:59:47.378489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.464 [2024-11-04 13:59:47.378503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.464 [2024-11-04 13:59:47.378519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:00.464 [2024-11-04 13:59:47.378529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.403551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.403624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.724 [2024-11-04 13:59:47.403650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.985 ms 00:26:00.724 [2024-11-04 13:59:47.403662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.426243] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:00.724 [2024-11-04 13:59:47.426308] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:00.724 [2024-11-04 13:59:47.426336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.426350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:00.724 [2024-11-04 13:59:47.426371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.499 ms 00:26:00.724 [2024-11-04 13:59:47.426383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.462013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.462250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:00.724 [2024-11-04 13:59:47.462294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.449 ms 00:26:00.724 [2024-11-04 13:59:47.462308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.486313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.486532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:00.724 [2024-11-04 13:59:47.486677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.836 ms 00:26:00.724 [2024-11-04 13:59:47.486787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.509495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.509722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:00.724 [2024-11-04 13:59:47.509853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.519 ms 00:26:00.724 [2024-11-04 13:59:47.509899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.511024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.511168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:00.724 [2024-11-04 13:59:47.511272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:26:00.724 [2024-11-04 13:59:47.511317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 
13:59:47.624808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.724 [2024-11-04 13:59:47.625097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:00.724 [2024-11-04 13:59:47.625136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.378 ms 00:26:00.724 [2024-11-04 13:59:47.625151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.724 [2024-11-04 13:59:47.640066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:00.983 [2024-11-04 13:59:47.658430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.658520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:00.983 [2024-11-04 13:59:47.658545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.126 ms 00:26:00.983 [2024-11-04 13:59:47.658562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.658753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.658774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:00.983 [2024-11-04 13:59:47.658787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:00.983 [2024-11-04 13:59:47.658804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.658889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.658917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:00.983 [2024-11-04 13:59:47.658929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:00.983 [2024-11-04 13:59:47.658946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.658981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.658998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:00.983 [2024-11-04 13:59:47.659010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:00.983 [2024-11-04 13:59:47.659029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.659073] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:00.983 [2024-11-04 13:59:47.659099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.659111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:00.983 [2024-11-04 13:59:47.659134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:00.983 [2024-11-04 13:59:47.659145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.702125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.702370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:00.983 [2024-11-04 13:59:47.702412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.929 ms 00:26:00.983 [2024-11-04 13:59:47.702426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.702638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.983 [2024-11-04 13:59:47.702657] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:00.983 [2024-11-04 13:59:47.702677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:00.983 [2024-11-04 13:59:47.702696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.983 [2024-11-04 13:59:47.703807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:00.983 [2024-11-04 13:59:47.709600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 471.144 ms, result 0 00:26:00.983 [2024-11-04 13:59:47.710765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:00.983 Some configs were skipped because the RPC state that can call them passed over. 00:26:00.983 13:59:47 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:26:01.242 [2024-11-04 13:59:48.065486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.242 [2024-11-04 13:59:48.065592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:01.242 [2024-11-04 13:59:48.065613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.485 ms 00:26:01.242 [2024-11-04 13:59:48.065632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.242 [2024-11-04 13:59:48.065682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.687 ms, result 0 00:26:01.242 true 00:26:01.242 13:59:48 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:26:01.501 [2024-11-04 13:59:48.385382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.501 [2024-11-04 13:59:48.385464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:01.501 [2024-11-04 13:59:48.385506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:26:01.501 [2024-11-04 13:59:48.385520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.501 [2024-11-04 13:59:48.385596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.272 ms, result 0 00:26:01.501 true 00:26:01.501 13:59:48 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77040 00:26:01.501 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 77040 ']' 00:26:01.501 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 77040 00:26:01.501 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77040 00:26:01.759 killing process with pid 77040 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77040' 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 77040 00:26:01.759 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 77040 00:26:03.137 [2024-11-04 13:59:49.671125] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.137 [2024-11-04 13:59:49.671194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:03.137 [2024-11-04 13:59:49.671210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:03.137 [2024-11-04 13:59:49.671223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.137 [2024-11-04 13:59:49.671248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:03.137 [2024-11-04 13:59:49.675868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.137 [2024-11-04 13:59:49.675906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:03.137 [2024-11-04 13:59:49.675924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:26:03.137 [2024-11-04 13:59:49.675935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.137 [2024-11-04 13:59:49.676187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.137 [2024-11-04 13:59:49.676202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:03.137 [2024-11-04 13:59:49.676216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:26:03.137 [2024-11-04 13:59:49.676226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.137 [2024-11-04 13:59:49.679923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.137 [2024-11-04 13:59:49.679961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:03.137 [2024-11-04 13:59:49.679982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:26:03.137 [2024-11-04 13:59:49.679993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.137 [2024-11-04 13:59:49.686180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.686356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:03.138 [2024-11-04 13:59:49.686386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:26:03.138 [2024-11-04 13:59:49.686398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.703204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.703244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:03.138 [2024-11-04 13:59:49.703265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.732 ms 00:26:03.138 [2024-11-04 13:59:49.703288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.714769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.714814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:03.138 [2024-11-04 13:59:49.714836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.402 ms 00:26:03.138 [2024-11-04 13:59:49.714847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.715002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.715016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:03.138 [2024-11-04 13:59:49.715030] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:03.138 [2024-11-04 13:59:49.715041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.731497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.731559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:03.138 [2024-11-04 13:59:49.731593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.430 ms 00:26:03.138 [2024-11-04 13:59:49.731604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.748176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.748365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:03.138 [2024-11-04 13:59:49.748406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.506 ms 00:26:03.138 [2024-11-04 13:59:49.748418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.764892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.764934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:03.138 [2024-11-04 13:59:49.764961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.338 ms 00:26:03.138 [2024-11-04 13:59:49.764973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.781449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.138 [2024-11-04 13:59:49.781642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:03.138 [2024-11-04 13:59:49.781677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.359 ms 00:26:03.138 [2024-11-04 13:59:49.781690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.138 [2024-11-04 13:59:49.781756] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:03.138 [2024-11-04 13:59:49.781777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 
13:59:49.781936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.781995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:26:03.138 [2024-11-04 13:59:49.782315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:03.138 [2024-11-04 13:59:49.782757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.782995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:03.139 [2024-11-04 13:59:49.783310] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:03.139 [2024-11-04 13:59:49.783338] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:26:03.139 [2024-11-04 13:59:49.783363] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:03.139 [2024-11-04 13:59:49.783385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:03.139 [2024-11-04 13:59:49.783396] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:03.139 [2024-11-04 13:59:49.783411] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:03.139 [2024-11-04 13:59:49.783421] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:03.139 [2024-11-04 13:59:49.783436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:03.139 [2024-11-04 13:59:49.783447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:03.139 [2024-11-04 13:59:49.783460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:03.139 [2024-11-04 13:59:49.783470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:03.139 [2024-11-04 13:59:49.783484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:03.139 [2024-11-04 13:59:49.783495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:03.139 [2024-11-04 13:59:49.783511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:26:03.139 [2024-11-04 13:59:49.783521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:49.805689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.139 [2024-11-04 13:59:49.805736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:03.139 [2024-11-04 13:59:49.805764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.128 ms 00:26:03.139 [2024-11-04 13:59:49.805776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:49.806494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.139 [2024-11-04 13:59:49.806518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:03.139 [2024-11-04 13:59:49.806536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:26:03.139 [2024-11-04 13:59:49.806554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:49.883654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.139 [2024-11-04 13:59:49.883710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.139 [2024-11-04 13:59:49.883730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.139 [2024-11-04 13:59:49.883744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:49.883905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.139 [2024-11-04 13:59:49.883920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.139 [2024-11-04 13:59:49.883936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.139 [2024-11-04 13:59:49.883952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:49.884029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.139 [2024-11-04 13:59:49.884043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.139 [2024-11-04 13:59:49.884061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.139 [2024-11-04 13:59:49.884072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:49.884095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.139 [2024-11-04 13:59:49.884107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.139 [2024-11-04 13:59:49.884121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.139 [2024-11-04 13:59:49.884132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.139 [2024-11-04 13:59:50.023919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.139 [2024-11-04 13:59:50.023978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.139 [2024-11-04 13:59:50.023998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.139 [2024-11-04 13:59:50.024010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 
13:59:50.140031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.399 [2024-11-04 13:59:50.140129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.140276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.399 [2024-11-04 13:59:50.140327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.140387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.399 [2024-11-04 13:59:50.140415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.140570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.399 [2024-11-04 13:59:50.140625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.140689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:03.399 [2024-11-04 13:59:50.140719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.140777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.399 [2024-11-04 13:59:50.140811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.140890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.399 [2024-11-04 13:59:50.140904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.399 [2024-11-04 13:59:50.140920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.399 [2024-11-04 13:59:50.140932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.399 [2024-11-04 13:59:50.141088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.930 ms, result 0 00:26:04.773 13:59:51 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:04.773 [2024-11-04 13:59:51.450141] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:26:04.773 [2024-11-04 13:59:51.450337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77121 ] 00:26:04.773 [2024-11-04 13:59:51.654049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.031 [2024-11-04 13:59:51.825353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.600 [2024-11-04 13:59:52.242476] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.600 [2024-11-04 13:59:52.242770] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.600 [2024-11-04 13:59:52.408818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.409362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.600 [2024-11-04 13:59:52.409615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:05.600 [2024-11-04 13:59:52.409837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.600 [2024-11-04 13:59:52.413630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.413853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.600 [2024-11-04 13:59:52.414038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:26:05.600 [2024-11-04 13:59:52.414107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.600 [2024-11-04 13:59:52.414331] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.600 [2024-11-04 13:59:52.415483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.600 [2024-11-04 13:59:52.415688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.415743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.600 [2024-11-04 13:59:52.415884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:26:05.600 [2024-11-04 13:59:52.415971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.600 [2024-11-04 13:59:52.417710] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:05.600 [2024-11-04 13:59:52.439455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.439738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:05.600 [2024-11-04 13:59:52.439819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.744 ms 00:26:05.600 [2024-11-04 13:59:52.439879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.600 [2024-11-04 13:59:52.440041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.440225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:05.600 [2024-11-04 13:59:52.440322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:05.600 [2024-11-04 
13:59:52.440381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.600 [2024-11-04 13:59:52.447824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.447952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.600 [2024-11-04 13:59:52.448024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.207 ms 00:26:05.600 [2024-11-04 13:59:52.448082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.600 [2024-11-04 13:59:52.448368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.600 [2024-11-04 13:59:52.448457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.600 [2024-11-04 13:59:52.448525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:05.601 [2024-11-04 13:59:52.448612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.601 [2024-11-04 13:59:52.448696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.601 [2024-11-04 13:59:52.448877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.601 [2024-11-04 13:59:52.448982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:05.601 [2024-11-04 13:59:52.449050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.601 [2024-11-04 13:59:52.449133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:05.601 [2024-11-04 13:59:52.454617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.601 [2024-11-04 13:59:52.454760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.601 [2024-11-04 13:59:52.454799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.494 ms 00:26:05.601 [2024-11-04 13:59:52.454811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.601 [2024-11-04 13:59:52.454900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.601 [2024-11-04 13:59:52.454915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.601 [2024-11-04 13:59:52.454928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:05.601 [2024-11-04 13:59:52.454939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.601 [2024-11-04 13:59:52.454968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:05.601 [2024-11-04 13:59:52.455000] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:05.601 [2024-11-04 13:59:52.455042] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:05.601 [2024-11-04 13:59:52.455064] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:05.601 [2024-11-04 13:59:52.455169] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.601 [2024-11-04 13:59:52.455185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.601 [2024-11-04 13:59:52.455201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:26:05.601 [2024-11-04 13:59:52.455216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455235] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455248] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:05.601 [2024-11-04 13:59:52.455260] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.601 [2024-11-04 13:59:52.455272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.601 [2024-11-04 13:59:52.455283] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.601 [2024-11-04 13:59:52.455296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.601 [2024-11-04 13:59:52.455308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.601 [2024-11-04 13:59:52.455320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:26:05.601 [2024-11-04 13:59:52.455331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.601 [2024-11-04 13:59:52.455422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.601 [2024-11-04 13:59:52.455435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.601 [2024-11-04 13:59:52.455451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:05.601 [2024-11-04 13:59:52.455474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.601 [2024-11-04 13:59:52.455573] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.601 [2024-11-04 13:59:52.455602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.601 [2024-11-04 13:59:52.455637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.601 [2024-11-04 13:59:52.455673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.601 [2024-11-04 13:59:52.455708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.601 [2024-11-04 13:59:52.455730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.601 [2024-11-04 13:59:52.455741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:05.601 [2024-11-04 13:59:52.455754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.601 [2024-11-04 13:59:52.455777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.601 [2024-11-04 13:59:52.455788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:05.601 [2024-11-04 13:59:52.455799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:26:05.601 [2024-11-04 13:59:52.455822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.601 [2024-11-04 13:59:52.455853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.601 [2024-11-04 13:59:52.455886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.601 [2024-11-04 13:59:52.455918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.601 [2024-11-04 13:59:52.455949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.601 [2024-11-04 13:59:52.455970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.601 [2024-11-04 13:59:52.455980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:05.601 [2024-11-04 13:59:52.455991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.601 [2024-11-04 13:59:52.456001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.601 [2024-11-04 13:59:52.456011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:05.601 [2024-11-04 13:59:52.456022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.601 [2024-11-04 13:59:52.456032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.601 [2024-11-04 13:59:52.456044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:05.601 [2024-11-04 13:59:52.456054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.456065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.601 [2024-11-04 13:59:52.456075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:05.601 [2024-11-04 13:59:52.456086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.456096] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.601 [2024-11-04 13:59:52.456108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.601 [2024-11-04 13:59:52.456119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.601 [2024-11-04 13:59:52.456135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.601 [2024-11-04 13:59:52.456146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.601 [2024-11-04 13:59:52.456158] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.601 [2024-11-04 13:59:52.456168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.601 [2024-11-04 13:59:52.456179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.601 [2024-11-04 13:59:52.456190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.601 [2024-11-04 13:59:52.456201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.601 [2024-11-04 13:59:52.456213] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.601 [2024-11-04 13:59:52.456227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.601 [2024-11-04 13:59:52.456240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:05.602 [2024-11-04 13:59:52.456252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:05.602 [2024-11-04 13:59:52.456264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:05.602 [2024-11-04 13:59:52.456276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:05.602 [2024-11-04 13:59:52.456289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:05.602 [2024-11-04 13:59:52.456301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:05.602 [2024-11-04 13:59:52.456312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:05.602 [2024-11-04 13:59:52.456324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:05.602 [2024-11-04 13:59:52.456336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:05.602 [2024-11-04 13:59:52.456348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:05.602 [2024-11-04 13:59:52.456360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:05.602 [2024-11-04 13:59:52.456371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:05.602 [2024-11-04 13:59:52.456383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:05.602 [2024-11-04 13:59:52.456395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:05.602 [2024-11-04 13:59:52.456407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:05.602 [2024-11-04 13:59:52.456424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.602 [2024-11-04 13:59:52.456437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.602 [2024-11-04 13:59:52.456449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.602 [2024-11-04 13:59:52.456461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.602 [2024-11-04 13:59:52.456474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.602 [2024-11-04 13:59:52.456486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.602 [2024-11-04 13:59:52.456498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.602 [2024-11-04 13:59:52.456513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:26:05.602 [2024-11-04 13:59:52.456525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.602 [2024-11-04 13:59:52.502696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.602 [2024-11-04 13:59:52.502766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.602 [2024-11-04 13:59:52.502785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.090 ms 00:26:05.602 [2024-11-04 13:59:52.502798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.602 [2024-11-04 13:59:52.503022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.602 [2024-11-04 13:59:52.503038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:05.602 [2024-11-04 13:59:52.503053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:05.602 [2024-11-04 13:59:52.503064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.566539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.566796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.861 [2024-11-04 13:59:52.566848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.443 ms 00:26:05.861 [2024-11-04 13:59:52.566861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.566992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.567012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.861 [2024-11-04 13:59:52.567025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:05.861 [2024-11-04 13:59:52.567037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.567499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.567515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.861 [2024-11-04 13:59:52.567528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:26:05.861 [2024-11-04 13:59:52.567547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.567697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.567714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.861 [2024-11-04 13:59:52.567727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:26:05.861 [2024-11-04 13:59:52.567739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.589583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.589639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.861 [2024-11-04 13:59:52.589670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.814 ms 00:26:05.861 [2024-11-04 13:59:52.589682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.611385] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:05.861 [2024-11-04 13:59:52.611436] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:05.861 [2024-11-04 13:59:52.611456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.611469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:05.861 [2024-11-04 13:59:52.611483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.578 ms 00:26:05.861 [2024-11-04 13:59:52.611494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.646694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.646792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:05.861 [2024-11-04 13:59:52.646812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.042 ms 00:26:05.861 [2024-11-04 13:59:52.646825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.669678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.669742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:05.861 [2024-11-04 13:59:52.669760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.709 ms 00:26:05.861 [2024-11-04 13:59:52.669772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.692155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.692390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:05.861 [2024-11-04 13:59:52.692419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.262 ms 00:26:05.861 [2024-11-04 13:59:52.692432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.861 [2024-11-04 13:59:52.693417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.861 [2024-11-04 13:59:52.693454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.861 [2024-11-04 13:59:52.693469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:26:05.861 [2024-11-04 13:59:52.693481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.798154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 
13:59:52.798423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:06.120 [2024-11-04 13:59:52.798453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.635 ms 00:26:06.120 [2024-11-04 13:59:52.798468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.813722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:06.120 [2024-11-04 13:59:52.832557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.832640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:06.120 [2024-11-04 13:59:52.832658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.906 ms 00:26:06.120 [2024-11-04 13:59:52.832672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.832857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.832875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:06.120 [2024-11-04 13:59:52.832889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:06.120 [2024-11-04 13:59:52.832901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.832969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.832983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:06.120 [2024-11-04 13:59:52.832997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:06.120 [2024-11-04 13:59:52.833009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.833044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.833062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:06.120 [2024-11-04 13:59:52.833088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:06.120 [2024-11-04 13:59:52.833100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.833144] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:06.120 [2024-11-04 13:59:52.833160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.833172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:06.120 [2024-11-04 13:59:52.833184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:06.120 [2024-11-04 13:59:52.833196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.879668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.879749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:06.120 [2024-11-04 13:59:52.879769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.441 ms 00:26:06.120 [2024-11-04 13:59:52.879782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.120 [2024-11-04 13:59:52.879987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.120 [2024-11-04 13:59:52.880005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:06.120 [2024-11-04 
13:59:52.880019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:06.120 [2024-11-04 13:59:52.880032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.121 [2024-11-04 13:59:52.881292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:06.121 [2024-11-04 13:59:52.887371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 472.117 ms, result 0 00:26:06.121 [2024-11-04 13:59:52.888332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:06.121 [2024-11-04 13:59:52.910142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:07.495  [2024-11-04T13:59:55.006Z] Copying: 33/256 [MB] (33 MBps) [2024-11-04T13:59:56.397Z] Copying: 60/256 [MB] (26 MBps) [2024-11-04T13:59:57.330Z] Copying: 83/256 [MB] (23 MBps) [2024-11-04T13:59:58.266Z] Copying: 109/256 [MB] (25 MBps) [2024-11-04T13:59:59.202Z] Copying: 137/256 [MB] (28 MBps) [2024-11-04T14:00:00.142Z] Copying: 165/256 [MB] (27 MBps) [2024-11-04T14:00:01.078Z] Copying: 199/256 [MB] (33 MBps) [2024-11-04T14:00:02.012Z] Copying: 229/256 [MB] (29 MBps) [2024-11-04T14:00:02.271Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-04 14:00:02.169135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:15.349 [2024-11-04 14:00:02.195332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.349 [2024-11-04 14:00:02.195414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:15.349 [2024-11-04 14:00:02.195438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:15.349 [2024-11-04 14:00:02.195470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.349 [2024-11-04 14:00:02.195511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:15.349 [2024-11-04 14:00:02.203147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.349 [2024-11-04 14:00:02.203210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:15.349 [2024-11-04 14:00:02.203231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.608 ms 00:26:15.349 [2024-11-04 14:00:02.203247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.349 [2024-11-04 14:00:02.203655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.349 [2024-11-04 14:00:02.203683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:15.349 [2024-11-04 14:00:02.203701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:26:15.349 [2024-11-04 14:00:02.203717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.349 [2024-11-04 14:00:02.208466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.349 [2024-11-04 14:00:02.208537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:15.349 [2024-11-04 14:00:02.208556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:26:15.349 [2024-11-04 14:00:02.208591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.349 [2024-11-04 14:00:02.218005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:15.349 [2024-11-04 14:00:02.218243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:15.349 [2024-11-04 14:00:02.218277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.343 ms 00:26:15.349 [2024-11-04 14:00:02.218294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.607 [2024-11-04 14:00:02.281231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.607 [2024-11-04 14:00:02.281332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:15.607 [2024-11-04 14:00:02.281358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.841 ms 00:26:15.607 [2024-11-04 14:00:02.281375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.607 [2024-11-04 14:00:02.313991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.607 [2024-11-04 14:00:02.314112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:15.607 [2024-11-04 14:00:02.314138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.447 ms 00:26:15.607 [2024-11-04 14:00:02.314163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.607 [2024-11-04 14:00:02.314425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.607 [2024-11-04 14:00:02.314451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:15.607 [2024-11-04 14:00:02.314469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:26:15.607 [2024-11-04 14:00:02.314485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.607 [2024-11-04 14:00:02.376191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.607 [2024-11-04 14:00:02.376283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:15.607 [2024-11-04 14:00:02.376307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.652 ms 00:26:15.607 [2024-11-04 14:00:02.376323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.607 [2024-11-04 14:00:02.437093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.607 [2024-11-04 14:00:02.437181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:15.607 [2024-11-04 14:00:02.437205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.634 ms 00:26:15.607 [2024-11-04 14:00:02.437221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.607 [2024-11-04 14:00:02.498240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.607 [2024-11-04 14:00:02.498591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:15.607 [2024-11-04 14:00:02.498629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.882 ms 00:26:15.607 [2024-11-04 14:00:02.498646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.867 [2024-11-04 14:00:02.559665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.867 [2024-11-04 14:00:02.559748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:15.867 [2024-11-04 14:00:02.559782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.816 ms 00:26:15.867 [2024-11-04 14:00:02.559799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.867 [2024-11-04 
14:00:02.559922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:15.867 [2024-11-04 14:00:02.559951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.559971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.559990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:15.867 [2024-11-04 14:00:02.560279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 
14:00:02.560362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:26:15.868 [2024-11-04 14:00:02.560821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.560985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:15.868 [2024-11-04 14:00:02.561734] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:15.868 [2024-11-04 14:00:02.561750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78bfc9c3-5954-4389-a76d-b2e42aa87556 00:26:15.868 [2024-11-04 14:00:02.561769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:15.868 [2024-11-04 14:00:02.561784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:15.868 [2024-11-04 14:00:02.561800] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:15.868 [2024-11-04 14:00:02.561816] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:15.868 [2024-11-04 14:00:02.561831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:15.868 [2024-11-04 14:00:02.561847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:15.868 [2024-11-04 14:00:02.561863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:15.869 [2024-11-04 14:00:02.561878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:15.869 [2024-11-04 14:00:02.561892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:15.869 [2024-11-04 14:00:02.561907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.869 [2024-11-04 14:00:02.561930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:15.869 [2024-11-04 14:00:02.561947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:26:15.869 [2024-11-04 14:00:02.561963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.869 [2024-11-04 14:00:02.594741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.869 [2024-11-04 14:00:02.594818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:15.869 [2024-11-04 14:00:02.594842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.738 ms 00:26:15.869 [2024-11-04 14:00:02.594859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.869 [2024-11-04 14:00:02.595892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.869 [2024-11-04 14:00:02.595923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:15.869 [2024-11-04 14:00:02.595940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:26:15.869 [2024-11-04 14:00:02.595957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.869 [2024-11-04 14:00:02.663347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.869 [2024-11-04 14:00:02.663688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.869 [2024-11-04 14:00:02.663718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.869 [2024-11-04 14:00:02.663731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.869 [2024-11-04 14:00:02.663892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.869 [2024-11-04 14:00:02.663907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.869 [2024-11-04 14:00:02.663920] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.869 [2024-11-04 14:00:02.663932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.869 [2024-11-04 14:00:02.663997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.869 [2024-11-04 14:00:02.664013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.869 [2024-11-04 14:00:02.664025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.869 [2024-11-04 14:00:02.664036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.869 [2024-11-04 14:00:02.664059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.869 [2024-11-04 14:00:02.664077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.869 [2024-11-04 14:00:02.664088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.869 [2024-11-04 14:00:02.664100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.126 [2024-11-04 14:00:02.813192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.126 [2024-11-04 14:00:02.813267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.126 [2024-11-04 14:00:02.813286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.126 [2024-11-04 14:00:02.813299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.126 [2024-11-04 14:00:02.938204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.126 [2024-11-04 14:00:02.938304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.126 [2024-11-04 14:00:02.938321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.126 [2024-11-04 14:00:02.938350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.126 [2024-11-04 14:00:02.938465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.126 [2024-11-04 14:00:02.938480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.126 [2024-11-04 14:00:02.938492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.126 [2024-11-04 14:00:02.938504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.126 [2024-11-04 14:00:02.938538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.126 [2024-11-04 14:00:02.938550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.126 [2024-11-04 14:00:02.938566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.126 [2024-11-04 14:00:02.938578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.126 [2024-11-04 14:00:02.938736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.127 [2024-11-04 14:00:02.938752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.127 [2024-11-04 14:00:02.938766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.127 [2024-11-04 14:00:02.938778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.127 [2024-11-04 14:00:02.938826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.127 [2024-11-04 14:00:02.938840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:26:16.127 [2024-11-04 14:00:02.938853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.127 [2024-11-04 14:00:02.938869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.127 [2024-11-04 14:00:02.938915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.127 [2024-11-04 14:00:02.938933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.127 [2024-11-04 14:00:02.938946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.127 [2024-11-04 14:00:02.938957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.127 [2024-11-04 14:00:02.939008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.127 [2024-11-04 14:00:02.939021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.127 [2024-11-04 14:00:02.939037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.127 [2024-11-04 14:00:02.939049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.127 [2024-11-04 14:00:02.939206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 743.901 ms, result 0 00:26:17.508 00:26:17.508 00:26:17.508 14:00:04 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:18.074 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:26:18.074 14:00:04 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:18.074 14:00:04 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:26:18.074 14:00:04 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:18.074 14:00:04 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:18.074 14:00:04 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:26:18.332 14:00:05 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:18.332 14:00:05 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77040 00:26:18.332 14:00:05 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 77040 ']' 00:26:18.332 14:00:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 77040 00:26:18.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77040) - No such process 00:26:18.332 14:00:05 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 77040 is not found' 00:26:18.332 Process with pid 77040 is not found 00:26:18.332 ************************************ 00:26:18.332 END TEST ftl_trim 00:26:18.332 ************************************ 00:26:18.332 00:26:18.332 real 1m12.061s 00:26:18.332 user 1m40.336s 00:26:18.332 sys 0m8.146s 00:26:18.332 14:00:05 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:18.332 14:00:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:18.332 14:00:05 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:18.332 14:00:05 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:18.332 14:00:05 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:18.332 14:00:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:18.332 ************************************ 00:26:18.332 START TEST ftl_restore 00:26:18.333 
************************************ 00:26:18.333 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:18.591 * Looking for test storage... 00:26:18.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.591 14:00:05 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.591 --rc genhtml_branch_coverage=1 00:26:18.591 --rc genhtml_function_coverage=1 00:26:18.591 --rc genhtml_legend=1 00:26:18.591 --rc geninfo_all_blocks=1 00:26:18.591 --rc geninfo_unexecuted_blocks=1 00:26:18.591 00:26:18.591 ' 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.591 --rc genhtml_branch_coverage=1 00:26:18.591 --rc genhtml_function_coverage=1 00:26:18.591 --rc genhtml_legend=1 00:26:18.591 --rc geninfo_all_blocks=1 00:26:18.591 --rc geninfo_unexecuted_blocks=1 00:26:18.591 00:26:18.591 ' 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.591 --rc genhtml_branch_coverage=1 00:26:18.591 --rc genhtml_function_coverage=1 00:26:18.591 --rc genhtml_legend=1 00:26:18.591 --rc geninfo_all_blocks=1 00:26:18.591 --rc geninfo_unexecuted_blocks=1 00:26:18.591 00:26:18.591 ' 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:18.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.591 --rc genhtml_branch_coverage=1 00:26:18.591 --rc genhtml_function_coverage=1 00:26:18.591 --rc genhtml_legend=1 00:26:18.591 --rc geninfo_all_blocks=1 00:26:18.591 --rc geninfo_unexecuted_blocks=1 00:26:18.591 00:26:18.591 ' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qvzssiWP0w 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:18.591 
14:00:05 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77325 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77325 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 77325 ']' 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.591 14:00:05 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:18.591 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:18.858 [2024-11-04 14:00:05.513246] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:26:18.858 [2024-11-04 14:00:05.513401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77325 ] 00:26:18.858 [2024-11-04 14:00:05.703213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.122 [2024-11-04 14:00:05.882059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.527 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:20.527 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:26:20.527 14:00:07 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:20.527 14:00:07 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:26:20.527 14:00:07 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:20.527 14:00:07 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:26:20.527 14:00:07 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:26:20.527 14:00:07 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:20.797 14:00:07 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:20.797 14:00:07 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:26:20.797 14:00:07 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:20.797 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:26:20.797 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:20.797 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:26:20.797 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:26:20.797 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:21.059 { 00:26:21.059 "name": "nvme0n1", 00:26:21.059 "aliases": [ 00:26:21.059 "e1c2c7cc-8d6a-4242-a5ad-13ecf8728f63" 00:26:21.059 ], 00:26:21.059 "product_name": "NVMe disk", 00:26:21.059 "block_size": 4096, 00:26:21.059 "num_blocks": 1310720, 00:26:21.059 "uuid": 
"e1c2c7cc-8d6a-4242-a5ad-13ecf8728f63", 00:26:21.059 "numa_id": -1, 00:26:21.059 "assigned_rate_limits": { 00:26:21.059 "rw_ios_per_sec": 0, 00:26:21.059 "rw_mbytes_per_sec": 0, 00:26:21.059 "r_mbytes_per_sec": 0, 00:26:21.059 "w_mbytes_per_sec": 0 00:26:21.059 }, 00:26:21.059 "claimed": true, 00:26:21.059 "claim_type": "read_many_write_one", 00:26:21.059 "zoned": false, 00:26:21.059 "supported_io_types": { 00:26:21.059 "read": true, 00:26:21.059 "write": true, 00:26:21.059 "unmap": true, 00:26:21.059 "flush": true, 00:26:21.059 "reset": true, 00:26:21.059 "nvme_admin": true, 00:26:21.059 "nvme_io": true, 00:26:21.059 "nvme_io_md": false, 00:26:21.059 "write_zeroes": true, 00:26:21.059 "zcopy": false, 00:26:21.059 "get_zone_info": false, 00:26:21.059 "zone_management": false, 00:26:21.059 "zone_append": false, 00:26:21.059 "compare": true, 00:26:21.059 "compare_and_write": false, 00:26:21.059 "abort": true, 00:26:21.059 "seek_hole": false, 00:26:21.059 "seek_data": false, 00:26:21.059 "copy": true, 00:26:21.059 "nvme_iov_md": false 00:26:21.059 }, 00:26:21.059 "driver_specific": { 00:26:21.059 "nvme": [ 00:26:21.059 { 00:26:21.059 "pci_address": "0000:00:11.0", 00:26:21.059 "trid": { 00:26:21.059 "trtype": "PCIe", 00:26:21.059 "traddr": "0000:00:11.0" 00:26:21.059 }, 00:26:21.059 "ctrlr_data": { 00:26:21.059 "cntlid": 0, 00:26:21.059 "vendor_id": "0x1b36", 00:26:21.059 "model_number": "QEMU NVMe Ctrl", 00:26:21.059 "serial_number": "12341", 00:26:21.059 "firmware_revision": "8.0.0", 00:26:21.059 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:21.059 "oacs": { 00:26:21.059 "security": 0, 00:26:21.059 "format": 1, 00:26:21.059 "firmware": 0, 00:26:21.059 "ns_manage": 1 00:26:21.059 }, 00:26:21.059 "multi_ctrlr": false, 00:26:21.059 "ana_reporting": false 00:26:21.059 }, 00:26:21.059 "vs": { 00:26:21.059 "nvme_version": "1.4" 00:26:21.059 }, 00:26:21.059 "ns_data": { 00:26:21.059 "id": 1, 00:26:21.059 "can_share": false 00:26:21.059 } 00:26:21.059 } 00:26:21.059 ], 00:26:21.059 "mp_policy": "active_passive" 00:26:21.059 } 00:26:21.059 } 00:26:21.059 ]' 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:26:21.059 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:26:21.059 14:00:07 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:26:21.059 14:00:07 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:21.059 14:00:07 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:26:21.059 14:00:07 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:21.059 14:00:07 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:21.624 14:00:08 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f656c3de-fddc-47a9-a09e-e3c990f8896c 00:26:21.624 14:00:08 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:26:21.624 14:00:08 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f656c3de-fddc-47a9-a09e-e3c990f8896c 00:26:21.882 14:00:08 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:26:22.139 14:00:08 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=cf189bba-e971-4311-bb67-bb5a9f0b2cc0 00:26:22.139 14:00:08 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cf189bba-e971-4311-bb67-bb5a9f0b2cc0 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:26:22.397 14:00:09 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:22.397 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:22.397 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:22.397 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:26:22.397 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:26:22.397 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:22.964 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:22.964 { 00:26:22.964 "name": "e3795832-7ac7-4ff9-94f6-5a7f2dca0682", 00:26:22.964 "aliases": [ 00:26:22.964 "lvs/nvme0n1p0" 00:26:22.964 ], 00:26:22.964 "product_name": "Logical Volume", 00:26:22.964 "block_size": 4096, 00:26:22.964 "num_blocks": 26476544, 00:26:22.964 "uuid": "e3795832-7ac7-4ff9-94f6-5a7f2dca0682", 00:26:22.964 "assigned_rate_limits": { 00:26:22.964 "rw_ios_per_sec": 0, 00:26:22.964 "rw_mbytes_per_sec": 0, 00:26:22.964 "r_mbytes_per_sec": 0, 00:26:22.964 "w_mbytes_per_sec": 0 00:26:22.964 }, 00:26:22.964 "claimed": false, 00:26:22.964 "zoned": false, 00:26:22.964 "supported_io_types": { 00:26:22.964 "read": true, 00:26:22.964 "write": true, 00:26:22.964 "unmap": true, 00:26:22.964 "flush": false, 00:26:22.964 "reset": true, 00:26:22.964 "nvme_admin": false, 00:26:22.964 "nvme_io": false, 00:26:22.964 "nvme_io_md": false, 00:26:22.964 "write_zeroes": true, 00:26:22.964 "zcopy": false, 00:26:22.964 "get_zone_info": false, 00:26:22.964 "zone_management": false, 00:26:22.964 "zone_append": false, 00:26:22.964 "compare": false, 00:26:22.964 "compare_and_write": false, 00:26:22.964 "abort": false, 00:26:22.964 "seek_hole": true, 00:26:22.964 "seek_data": true, 00:26:22.964 "copy": false, 00:26:22.964 "nvme_iov_md": false 00:26:22.964 }, 00:26:22.964 "driver_specific": { 00:26:22.964 "lvol": { 00:26:22.964 "lvol_store_uuid": "cf189bba-e971-4311-bb67-bb5a9f0b2cc0", 00:26:22.964 "base_bdev": "nvme0n1", 00:26:22.964 "thin_provision": true, 00:26:22.964 "num_allocated_clusters": 0, 00:26:22.964 "snapshot": false, 00:26:22.964 "clone": false, 00:26:22.964 "esnap_clone": false 00:26:22.964 } 00:26:22.964 } 00:26:22.964 } 00:26:22.964 ]' 00:26:22.964 14:00:09 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:22.964 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:26:22.964 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:22.964 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:22.964 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:22.964 14:00:09 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:26:22.964 14:00:09 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:26:22.964 14:00:09 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:26:22.964 14:00:09 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:23.221 14:00:10 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:23.221 14:00:10 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:23.221 14:00:10 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:23.221 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:23.221 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:23.221 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:26:23.221 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:26:23.479 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:23.761 { 00:26:23.761 "name": "e3795832-7ac7-4ff9-94f6-5a7f2dca0682", 00:26:23.761 "aliases": [ 00:26:23.761 "lvs/nvme0n1p0" 00:26:23.761 ], 00:26:23.761 "product_name": "Logical Volume", 00:26:23.761 "block_size": 4096, 00:26:23.761 "num_blocks": 26476544, 00:26:23.761 "uuid": "e3795832-7ac7-4ff9-94f6-5a7f2dca0682", 00:26:23.761 "assigned_rate_limits": { 00:26:23.761 "rw_ios_per_sec": 0, 00:26:23.761 "rw_mbytes_per_sec": 0, 00:26:23.761 "r_mbytes_per_sec": 0, 00:26:23.761 "w_mbytes_per_sec": 0 00:26:23.761 }, 00:26:23.761 "claimed": false, 00:26:23.761 "zoned": false, 00:26:23.761 "supported_io_types": { 00:26:23.761 "read": true, 00:26:23.761 "write": true, 00:26:23.761 "unmap": true, 00:26:23.761 "flush": false, 00:26:23.761 "reset": true, 00:26:23.761 "nvme_admin": false, 00:26:23.761 "nvme_io": false, 00:26:23.761 "nvme_io_md": false, 00:26:23.761 "write_zeroes": true, 00:26:23.761 "zcopy": false, 00:26:23.761 "get_zone_info": false, 00:26:23.761 "zone_management": false, 00:26:23.761 "zone_append": false, 00:26:23.761 "compare": false, 00:26:23.761 "compare_and_write": false, 00:26:23.761 "abort": false, 00:26:23.761 "seek_hole": true, 00:26:23.761 "seek_data": true, 00:26:23.761 "copy": false, 00:26:23.761 "nvme_iov_md": false 00:26:23.761 }, 00:26:23.761 "driver_specific": { 00:26:23.761 "lvol": { 00:26:23.761 "lvol_store_uuid": "cf189bba-e971-4311-bb67-bb5a9f0b2cc0", 00:26:23.761 "base_bdev": "nvme0n1", 00:26:23.761 "thin_provision": true, 00:26:23.761 "num_allocated_clusters": 0, 00:26:23.761 "snapshot": false, 00:26:23.761 "clone": false, 00:26:23.761 "esnap_clone": false 00:26:23.761 } 00:26:23.761 } 00:26:23.761 } 00:26:23.761 ]' 00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
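The bs=/nb= pairs traced above come from the get_bdev_size helper (common/autotest_common.sh@1380-@1390), which the restore test invokes repeatedly on the same logical volume. A minimal sketch of what those xtrace lines perform, reconstructed from the trace itself rather than quoted from the helper's source; the rpc.py path is the one shown in the log, and the MiB arithmetic reproduces the logged values (4096 * 26476544 / 1024 / 1024 = 103424):

    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        # one RPC round-trip; the trace above logs the full JSON this returns
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 in this run
        echo $(( bs * nb / 1024 / 1024 ))              # bdev size in MiB: 103424
    }

The 5120 echoed for the QEMU NVMe namespace earlier follows the same arithmetic (4096 * 1310720 / 1024 / 1024), and the base_size=5171 picked up by create_nv_cache_bdev is consistent with 103424 / 20 under integer division, i.e. an NV cache sized at roughly 5 % of the base bdev.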
00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:23.761 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:26:23.761 14:00:10 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:26:23.761 14:00:10 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:24.020 14:00:10 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:24.020 14:00:10 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:24.020 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:24.020 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:24.020 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:26:24.020 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:26:24.020 14:00:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3795832-7ac7-4ff9-94f6-5a7f2dca0682 00:26:24.279 14:00:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:24.279 { 00:26:24.279 "name": "e3795832-7ac7-4ff9-94f6-5a7f2dca0682", 00:26:24.279 "aliases": [ 00:26:24.279 "lvs/nvme0n1p0" 00:26:24.279 ], 00:26:24.279 "product_name": "Logical Volume", 00:26:24.279 "block_size": 4096, 00:26:24.279 "num_blocks": 26476544, 00:26:24.279 "uuid": "e3795832-7ac7-4ff9-94f6-5a7f2dca0682", 00:26:24.279 "assigned_rate_limits": { 00:26:24.279 "rw_ios_per_sec": 0, 00:26:24.279 "rw_mbytes_per_sec": 0, 00:26:24.279 "r_mbytes_per_sec": 0, 00:26:24.279 "w_mbytes_per_sec": 0 00:26:24.279 }, 00:26:24.279 "claimed": false, 00:26:24.279 "zoned": false, 00:26:24.279 "supported_io_types": { 00:26:24.279 "read": true, 00:26:24.279 "write": true, 00:26:24.279 "unmap": true, 00:26:24.279 "flush": false, 00:26:24.279 "reset": true, 00:26:24.279 "nvme_admin": false, 00:26:24.279 "nvme_io": false, 00:26:24.279 "nvme_io_md": false, 00:26:24.279 "write_zeroes": true, 00:26:24.279 "zcopy": false, 00:26:24.279 "get_zone_info": false, 00:26:24.279 "zone_management": false, 00:26:24.279 "zone_append": false, 00:26:24.279 "compare": false, 00:26:24.279 "compare_and_write": false, 00:26:24.279 "abort": false, 00:26:24.279 "seek_hole": true, 00:26:24.279 "seek_data": true, 00:26:24.279 "copy": false, 00:26:24.279 "nvme_iov_md": false 00:26:24.279 }, 00:26:24.279 "driver_specific": { 00:26:24.279 "lvol": { 00:26:24.279 "lvol_store_uuid": "cf189bba-e971-4311-bb67-bb5a9f0b2cc0", 00:26:24.279 "base_bdev": "nvme0n1", 00:26:24.279 "thin_provision": true, 00:26:24.279 "num_allocated_clusters": 0, 00:26:24.279 "snapshot": false, 00:26:24.279 "clone": false, 00:26:24.279 "esnap_clone": false 00:26:24.279 } 00:26:24.279 } 00:26:24.279 } 00:26:24.279 ]' 00:26:24.279 14:00:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:24.538 14:00:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:26:24.538 14:00:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:24.538 14:00:11 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:26:24.538 14:00:11 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:24.538 14:00:11 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e3795832-7ac7-4ff9-94f6-5a7f2dca0682 --l2p_dram_limit 10' 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:26:24.538 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:26:24.538 14:00:11 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e3795832-7ac7-4ff9-94f6-5a7f2dca0682 --l2p_dram_limit 10 -c nvc0n1p0 00:26:24.798 [2024-11-04 14:00:11.570350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.570430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:24.798 [2024-11-04 14:00:11.570476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:24.798 [2024-11-04 14:00:11.570493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.570606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.570627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:24.798 [2024-11-04 14:00:11.570647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:24.798 [2024-11-04 14:00:11.570662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.570710] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:24.798 [2024-11-04 14:00:11.571896] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:24.798 [2024-11-04 14:00:11.571965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.571984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:24.798 [2024-11-04 14:00:11.572005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:26:24.798 [2024-11-04 14:00:11.572021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.572245] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b4d732ee-44ac-43ed-b057-e9820a577d87 00:26:24.798 [2024-11-04 14:00:11.574218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.574278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:24.798 [2024-11-04 14:00:11.574298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:24.798 [2024-11-04 14:00:11.574317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.583577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 
14:00:11.583653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:24.798 [2024-11-04 14:00:11.583675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.150 ms 00:26:24.798 [2024-11-04 14:00:11.583695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.583847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.583874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:24.798 [2024-11-04 14:00:11.583892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:24.798 [2024-11-04 14:00:11.583918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.584021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.584044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:24.798 [2024-11-04 14:00:11.584066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:24.798 [2024-11-04 14:00:11.584086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.584124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:24.798 [2024-11-04 14:00:11.590312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.590367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:24.798 [2024-11-04 14:00:11.590392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:26:24.798 [2024-11-04 14:00:11.590408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.590464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.798 [2024-11-04 14:00:11.590481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:24.798 [2024-11-04 14:00:11.590501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:24.798 [2024-11-04 14:00:11.590517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.798 [2024-11-04 14:00:11.590600] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:24.799 [2024-11-04 14:00:11.590793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:24.799 [2024-11-04 14:00:11.590825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:24.799 [2024-11-04 14:00:11.590847] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:24.799 [2024-11-04 14:00:11.590872] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:24.799 [2024-11-04 14:00:11.590891] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:24.799 [2024-11-04 14:00:11.590913] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:24.799 [2024-11-04 14:00:11.590933] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:24.799 [2024-11-04 14:00:11.590952] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:24.799 [2024-11-04 14:00:11.590967] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:24.799 [2024-11-04 14:00:11.590989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.799 [2024-11-04 14:00:11.591006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:24.799 [2024-11-04 14:00:11.591027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:26:24.799 [2024-11-04 14:00:11.591060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.799 [2024-11-04 14:00:11.591174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.799 [2024-11-04 14:00:11.591192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:24.799 [2024-11-04 14:00:11.591213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:24.799 [2024-11-04 14:00:11.591229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.799 [2024-11-04 14:00:11.591362] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:24.799 [2024-11-04 14:00:11.591395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:24.799 [2024-11-04 14:00:11.591417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:24.799 [2024-11-04 14:00:11.591470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:24.799 [2024-11-04 14:00:11.591524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.799 [2024-11-04 14:00:11.591557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:24.799 [2024-11-04 14:00:11.591614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:24.799 [2024-11-04 14:00:11.591633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.799 [2024-11-04 14:00:11.591648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:24.799 [2024-11-04 14:00:11.591668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:24.799 [2024-11-04 14:00:11.591683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:24.799 [2024-11-04 14:00:11.591725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:24.799 [2024-11-04 14:00:11.591780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:24.799 
[2024-11-04 14:00:11.591830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:24.799 [2024-11-04 14:00:11.591880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:24.799 [2024-11-04 14:00:11.591930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:24.799 [2024-11-04 14:00:11.591948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.799 [2024-11-04 14:00:11.591963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:24.799 [2024-11-04 14:00:11.591986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:24.799 [2024-11-04 14:00:11.592001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.799 [2024-11-04 14:00:11.592020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:24.799 [2024-11-04 14:00:11.592036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:24.799 [2024-11-04 14:00:11.592055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.799 [2024-11-04 14:00:11.592070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:24.799 [2024-11-04 14:00:11.592089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:24.799 [2024-11-04 14:00:11.592104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.592123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:24.799 [2024-11-04 14:00:11.592139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:24.799 [2024-11-04 14:00:11.592157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.592171] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:24.799 [2024-11-04 14:00:11.592193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:24.799 [2024-11-04 14:00:11.592209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.799 [2024-11-04 14:00:11.592228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.799 [2024-11-04 14:00:11.592245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:24.799 [2024-11-04 14:00:11.592268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:24.799 [2024-11-04 14:00:11.592283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:24.799 [2024-11-04 14:00:11.592302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:24.799 [2024-11-04 14:00:11.592319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:24.799 [2024-11-04 14:00:11.592339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:24.799 [2024-11-04 14:00:11.592362] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:24.799 [2024-11-04 
14:00:11.592391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.799 [2024-11-04 14:00:11.592409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:24.799 [2024-11-04 14:00:11.592430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:24.799 [2024-11-04 14:00:11.592448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:24.799 [2024-11-04 14:00:11.592471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:24.799 [2024-11-04 14:00:11.592488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:24.799 [2024-11-04 14:00:11.592510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:24.799 [2024-11-04 14:00:11.592527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:24.799 [2024-11-04 14:00:11.592549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:24.799 [2024-11-04 14:00:11.592587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:24.799 [2024-11-04 14:00:11.592617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:24.799 [2024-11-04 14:00:11.592637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:24.799 [2024-11-04 14:00:11.592662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:24.799 [2024-11-04 14:00:11.592681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:24.799 [2024-11-04 14:00:11.592704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:24.799 [2024-11-04 14:00:11.592722] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:24.799 [2024-11-04 14:00:11.592745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.800 [2024-11-04 14:00:11.592764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.800 [2024-11-04 14:00:11.592787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:24.800 [2024-11-04 14:00:11.592804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:24.800 [2024-11-04 14:00:11.592827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:24.800 [2024-11-04 14:00:11.592858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.800 [2024-11-04 14:00:11.592882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:24.800 [2024-11-04 14:00:11.592902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:26:24.800 [2024-11-04 14:00:11.592923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.800 [2024-11-04 14:00:11.593002] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:24.800 [2024-11-04 14:00:11.593032] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:28.993 [2024-11-04 14:00:15.895627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.993 [2024-11-04 14:00:15.895749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:28.993 [2024-11-04 14:00:15.895771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4302.583 ms 00:26:28.993 [2024-11-04 14:00:15.895787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:15.939642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:15.939723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.252 [2024-11-04 14:00:15.939747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.403 ms 00:26:29.252 [2024-11-04 14:00:15.939769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:15.940010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:15.940040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:29.252 [2024-11-04 14:00:15.940056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:26:29.252 [2024-11-04 14:00:15.940091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:15.990364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:15.990449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.252 [2024-11-04 14:00:15.990484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.159 ms 00:26:29.252 [2024-11-04 14:00:15.990502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:15.990596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:15.990615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.252 [2024-11-04 14:00:15.990637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:29.252 [2024-11-04 14:00:15.990653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:15.991206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:15.991241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.252 [2024-11-04 14:00:15.991257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:26:29.252 [2024-11-04 14:00:15.991274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 
[2024-11-04 14:00:15.991421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:15.991451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.252 [2024-11-04 14:00:15.991466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:26:29.252 [2024-11-04 14:00:15.991486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:16.014666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:16.014749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.252 [2024-11-04 14:00:16.014771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.149 ms 00:26:29.252 [2024-11-04 14:00:16.014805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:16.028503] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:29.252 [2024-11-04 14:00:16.032003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:16.032044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:29.252 [2024-11-04 14:00:16.032064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.995 ms 00:26:29.252 [2024-11-04 14:00:16.032093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:16.160124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:16.160220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:29.252 [2024-11-04 14:00:16.160250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.963 ms 00:26:29.252 [2024-11-04 14:00:16.160269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.252 [2024-11-04 14:00:16.160587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.252 [2024-11-04 14:00:16.160613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:29.252 [2024-11-04 14:00:16.160635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:26:29.252 [2024-11-04 14:00:16.160649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.511 [2024-11-04 14:00:16.204086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.511 [2024-11-04 14:00:16.204149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:29.511 [2024-11-04 14:00:16.204175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.304 ms 00:26:29.511 [2024-11-04 14:00:16.204191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.511 [2024-11-04 14:00:16.248835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.511 [2024-11-04 14:00:16.248947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:29.511 [2024-11-04 14:00:16.248976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.469 ms 00:26:29.511 [2024-11-04 14:00:16.248991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.511 [2024-11-04 14:00:16.250019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.511 [2024-11-04 14:00:16.250071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:29.511 
[2024-11-04 14:00:16.250091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:26:29.511 [2024-11-04 14:00:16.250110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.511 [2024-11-04 14:00:16.380189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.511 [2024-11-04 14:00:16.380273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:29.511 [2024-11-04 14:00:16.380302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 129.969 ms 00:26:29.511 [2024-11-04 14:00:16.380323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.511 [2024-11-04 14:00:16.424358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.511 [2024-11-04 14:00:16.424446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:29.511 [2024-11-04 14:00:16.424472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.896 ms 00:26:29.511 [2024-11-04 14:00:16.424487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.769 [2024-11-04 14:00:16.465302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.769 [2024-11-04 14:00:16.465377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:29.769 [2024-11-04 14:00:16.465403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.733 ms 00:26:29.769 [2024-11-04 14:00:16.465417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.769 [2024-11-04 14:00:16.507266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.769 [2024-11-04 14:00:16.507335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:29.769 [2024-11-04 14:00:16.507358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.780 ms 00:26:29.769 [2024-11-04 14:00:16.507371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.769 [2024-11-04 14:00:16.507479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.769 [2024-11-04 14:00:16.507498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:29.769 [2024-11-04 14:00:16.507532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:29.769 [2024-11-04 14:00:16.507550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.770 [2024-11-04 14:00:16.507862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.770 [2024-11-04 14:00:16.507890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:29.770 [2024-11-04 14:00:16.507908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:29.770 [2024-11-04 14:00:16.507927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.770 [2024-11-04 14:00:16.510157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4938.377 ms, result 0 00:26:29.770 { 00:26:29.770 "name": "ftl0", 00:26:29.770 "uuid": "b4d732ee-44ac-43ed-b057-e9820a577d87" 00:26:29.770 } 00:26:29.770 14:00:16 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:29.770 14:00:16 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:30.027 14:00:16 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:30.027 14:00:16 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:30.284 [2024-11-04 14:00:17.044385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.044466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:30.284 [2024-11-04 14:00:17.044486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:30.284 [2024-11-04 14:00:17.044519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.044594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:30.284 [2024-11-04 14:00:17.049362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.049415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:30.284 [2024-11-04 14:00:17.049446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:26:30.284 [2024-11-04 14:00:17.049464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.049846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.049888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:30.284 [2024-11-04 14:00:17.049908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:26:30.284 [2024-11-04 14:00:17.049922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.052668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.052697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:30.284 [2024-11-04 14:00:17.052736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:26:30.284 [2024-11-04 14:00:17.052756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.058135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.058184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:30.284 [2024-11-04 14:00:17.058207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.340 ms 00:26:30.284 [2024-11-04 14:00:17.058220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.097098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.097156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:30.284 [2024-11-04 14:00:17.097179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.806 ms 00:26:30.284 [2024-11-04 14:00:17.097193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.124047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.124139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:30.284 [2024-11-04 14:00:17.124169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.785 ms 00:26:30.284 [2024-11-04 14:00:17.124183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.124431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.124452] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:30.284 [2024-11-04 14:00:17.124474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:26:30.284 [2024-11-04 14:00:17.124492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.163915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.163974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:30.284 [2024-11-04 14:00:17.163997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.384 ms 00:26:30.284 [2024-11-04 14:00:17.164009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.284 [2024-11-04 14:00:17.202643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.284 [2024-11-04 14:00:17.202704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:30.284 [2024-11-04 14:00:17.202726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.571 ms 00:26:30.284 [2024-11-04 14:00:17.202739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.543 [2024-11-04 14:00:17.242607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.543 [2024-11-04 14:00:17.242662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:30.543 [2024-11-04 14:00:17.242684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.803 ms 00:26:30.543 [2024-11-04 14:00:17.242696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.543 [2024-11-04 14:00:17.281010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.543 [2024-11-04 14:00:17.281075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:30.543 [2024-11-04 14:00:17.281099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.165 ms 00:26:30.543 [2024-11-04 14:00:17.281112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.543 [2024-11-04 14:00:17.281174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:30.543 [2024-11-04 14:00:17.281218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281408] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 
[2024-11-04 14:00:17.281965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.281997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:26:30.543 [2024-11-04 14:00:17.282455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:30.543 [2024-11-04 14:00:17.282666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.282988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:30.544 [2024-11-04 14:00:17.283227] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:30.544 [2024-11-04 14:00:17.283258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4d732ee-44ac-43ed-b057-e9820a577d87 00:26:30.544 [2024-11-04 14:00:17.283277] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:30.544 [2024-11-04 14:00:17.283296] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:30.544 [2024-11-04 14:00:17.283318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:30.544 [2024-11-04 14:00:17.283335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:30.544 [2024-11-04 14:00:17.283355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:30.544 [2024-11-04 14:00:17.283372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:30.544 [2024-11-04 14:00:17.283389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:30.544 [2024-11-04 14:00:17.283405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:30.544 [2024-11-04 14:00:17.283417] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:26:30.544 [2024-11-04 14:00:17.283435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.544 [2024-11-04 14:00:17.283449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:30.544 [2024-11-04 14:00:17.283466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:26:30.544 [2024-11-04 14:00:17.283487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.544 [2024-11-04 14:00:17.306638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.544 [2024-11-04 14:00:17.306707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:30.544 [2024-11-04 14:00:17.306734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.027 ms 00:26:30.544 [2024-11-04 14:00:17.306747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.544 [2024-11-04 14:00:17.307359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.544 [2024-11-04 14:00:17.307390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:30.544 [2024-11-04 14:00:17.307412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:26:30.544 [2024-11-04 14:00:17.307425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.544 [2024-11-04 14:00:17.381669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.544 [2024-11-04 14:00:17.381749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.544 [2024-11-04 14:00:17.381774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.544 [2024-11-04 14:00:17.381790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.544 [2024-11-04 14:00:17.381891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.544 [2024-11-04 14:00:17.381930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:30.544 [2024-11-04 14:00:17.381969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.544 [2024-11-04 14:00:17.381982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.544 [2024-11-04 14:00:17.382130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.544 [2024-11-04 14:00:17.382147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.544 [2024-11-04 14:00:17.382165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.544 [2024-11-04 14:00:17.382179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.544 [2024-11-04 14:00:17.382211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.544 [2024-11-04 14:00:17.382233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.544 [2024-11-04 14:00:17.382249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.544 [2024-11-04 14:00:17.382267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.521434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.521532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.803 [2024-11-04 14:00:17.521574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
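The rollback trace above completes the bdev_ftl_unload issued at restore.sh@65. Note that before unloading, the test had already captured the live bdev configuration: the '{"subsystems": [' / save_subsystem_config / ']}' trio traced at restore.sh@61-@63. A hypothetical reconstruction of that capture is sketched here; only the three commands are verbatim from the trace, while the redirection target is inferred from the --json argument the spdk_dd restore step further below consumes:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Loading that JSON lets spdk_dd rebuild the same bdev stack (NVMe controller, lvstore, thin lvol, split cache, ftl0) inside its own process, without the long-running SPDK app.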
00:26:30.803 [2024-11-04 14:00:17.521592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.634555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.634645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.803 [2024-11-04 14:00:17.634668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.634684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.634826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.634841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.803 [2024-11-04 14:00:17.634866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.634879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.634967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.634985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.803 [2024-11-04 14:00:17.635006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.635019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.635149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.635166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.803 [2024-11-04 14:00:17.635182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.635206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.635273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.635288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:30.803 [2024-11-04 14:00:17.635313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.635325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.635381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.635402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.803 [2024-11-04 14:00:17.635418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.635435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.635496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:30.803 [2024-11-04 14:00:17.635510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.803 [2024-11-04 14:00:17.635526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:30.803 [2024-11-04 14:00:17.635538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-11-04 14:00:17.635830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 591.359 ms, result 0 00:26:30.803 true 00:26:30.803 14:00:17 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77325 
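The shutdown trace above is built from trace_step records emitted in quadruples by mngt/ftl_mngt.c: an Action (or Rollback) marker, a step name, a duration, and a status. A minimal sketch of a helper for digesting them, assuming the console output has been saved with one record per line to a file named build.log (both the file name and the one-record-per-line layout are assumptions, not part of the harness):

# Pair each management step name with the duration reported for it.
# The patterns match the 428:/430:trace_step records shown above.
awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); print name " -> " $0 }' build.log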
00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 77325 ']' 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 77325 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77325 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:30.803 killing process with pid 77325 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77325' 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 77325 00:26:30.803 14:00:17 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 77325 00:26:37.403 14:00:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:42.669 262144+0 records in 00:26:42.669 262144+0 records out 00:26:42.669 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.15498 s, 208 MB/s 00:26:42.669 14:00:28 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:44.045 14:00:30 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:44.045 [2024-11-04 14:00:30.710705] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:26:44.045 [2024-11-04 14:00:30.710843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77612 ] 00:26:44.045 [2024-11-04 14:00:30.899598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.303 [2024-11-04 14:00:31.071356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.562 [2024-11-04 14:00:31.455793] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:44.562 [2024-11-04 14:00:31.455867] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:44.821 [2024-11-04 14:00:31.623383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.623455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:44.821 [2024-11-04 14:00:31.623482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:44.821 [2024-11-04 14:00:31.623493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.623556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.623585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.821 [2024-11-04 14:00:31.623604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:44.821 [2024-11-04 14:00:31.623615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.623641] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:44.821 [2024-11-04 14:00:31.624873] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:44.821 [2024-11-04 14:00:31.624912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.624926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.821 [2024-11-04 14:00:31.624940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:26:44.821 [2024-11-04 14:00:31.624952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.626604] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:44.821 [2024-11-04 14:00:31.647789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.647838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:44.821 [2024-11-04 14:00:31.647854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.184 ms 00:26:44.821 [2024-11-04 14:00:31.647882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.648012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.648032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:44.821 [2024-11-04 14:00:31.648045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:44.821 [2024-11-04 14:00:31.648056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.655490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.655538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.821 [2024-11-04 14:00:31.655552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.337 ms 00:26:44.821 [2024-11-04 14:00:31.655563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.655682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.655698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.821 [2024-11-04 14:00:31.655710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:44.821 [2024-11-04 14:00:31.655721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.655769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.655781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:44.821 [2024-11-04 14:00:31.655794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:44.821 [2024-11-04 14:00:31.655805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.655835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:44.821 [2024-11-04 14:00:31.661280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.661327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.821 [2024-11-04 14:00:31.661360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.453 ms 00:26:44.821 [2024-11-04 14:00:31.661376] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.661417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.661430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:44.821 [2024-11-04 14:00:31.661443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:44.821 [2024-11-04 14:00:31.661455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.661530] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:44.821 [2024-11-04 14:00:31.661576] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:44.821 [2024-11-04 14:00:31.661626] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:44.821 [2024-11-04 14:00:31.661650] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:44.821 [2024-11-04 14:00:31.661764] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:44.821 [2024-11-04 14:00:31.661780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:44.821 [2024-11-04 14:00:31.661795] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:44.821 [2024-11-04 14:00:31.661810] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:44.821 [2024-11-04 14:00:31.661825] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:44.821 [2024-11-04 14:00:31.661839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:44.821 [2024-11-04 14:00:31.661850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:44.821 [2024-11-04 14:00:31.661862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:44.821 [2024-11-04 14:00:31.661873] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:44.821 [2024-11-04 14:00:31.661890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.661902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:44.821 [2024-11-04 14:00:31.661914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:26:44.821 [2024-11-04 14:00:31.661925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.662015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.821 [2024-11-04 14:00:31.662035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:44.821 [2024-11-04 14:00:31.662047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:44.821 [2024-11-04 14:00:31.662069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.821 [2024-11-04 14:00:31.662173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:44.821 [2024-11-04 14:00:31.662193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:44.821 [2024-11-04 14:00:31.662205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:44.821 [2024-11-04 14:00:31.662216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.821 [2024-11-04 14:00:31.662228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:44.822 [2024-11-04 14:00:31.662238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:44.822 [2024-11-04 14:00:31.662270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.822 [2024-11-04 14:00:31.662290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:44.822 [2024-11-04 14:00:31.662301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:44.822 [2024-11-04 14:00:31.662313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.822 [2024-11-04 14:00:31.662323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:44.822 [2024-11-04 14:00:31.662334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:44.822 [2024-11-04 14:00:31.662355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:44.822 [2024-11-04 14:00:31.662375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:44.822 [2024-11-04 14:00:31.662406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:44.822 [2024-11-04 14:00:31.662438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:44.822 [2024-11-04 14:00:31.662468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:44.822 [2024-11-04 14:00:31.662498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:44.822 [2024-11-04 14:00:31.662528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.822 [2024-11-04 14:00:31.662549] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:44.822 [2024-11-04 14:00:31.662559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:44.822 [2024-11-04 14:00:31.662569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.822 [2024-11-04 14:00:31.662593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:44.822 [2024-11-04 14:00:31.662603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:44.822 [2024-11-04 14:00:31.662613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:44.822 [2024-11-04 14:00:31.662634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:44.822 [2024-11-04 14:00:31.662644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662654] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:44.822 [2024-11-04 14:00:31.662666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:44.822 [2024-11-04 14:00:31.662676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.822 [2024-11-04 14:00:31.662698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:44.822 [2024-11-04 14:00:31.662714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:44.822 [2024-11-04 14:00:31.662724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:44.822 [2024-11-04 14:00:31.662735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:44.822 [2024-11-04 14:00:31.662746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:44.822 [2024-11-04 14:00:31.662756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:44.822 [2024-11-04 14:00:31.662768] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:44.822 [2024-11-04 14:00:31.662781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:44.822 [2024-11-04 14:00:31.662805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:44.822 [2024-11-04 14:00:31.662816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:44.822 [2024-11-04 14:00:31.662828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:44.822 [2024-11-04 14:00:31.662839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:44.822 [2024-11-04 14:00:31.662850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:44.822 [2024-11-04 14:00:31.662861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:44.822 [2024-11-04 14:00:31.662872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:44.822 [2024-11-04 14:00:31.662883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:44.822 [2024-11-04 14:00:31.662894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:44.822 [2024-11-04 14:00:31.662951] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:44.822 [2024-11-04 14:00:31.662967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:44.822 [2024-11-04 14:00:31.662990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:44.822 [2024-11-04 14:00:31.663001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:44.822 [2024-11-04 14:00:31.663012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:44.822 [2024-11-04 14:00:31.663024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.822 [2024-11-04 14:00:31.663035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:44.822 [2024-11-04 14:00:31.663047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:26:44.822 [2024-11-04 14:00:31.663057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.822 [2024-11-04 14:00:31.708014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.822 [2024-11-04 14:00:31.708091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.822 [2024-11-04 14:00:31.708114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.901 ms 00:26:44.822 [2024-11-04 14:00:31.708128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.822 [2024-11-04 14:00:31.708257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.822 [2024-11-04 14:00:31.708274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:44.822 [2024-11-04 14:00:31.708290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.061 ms 00:26:44.822 [2024-11-04 14:00:31.708304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.769352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.769408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.081 [2024-11-04 14:00:31.769426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.924 ms 00:26:45.081 [2024-11-04 14:00:31.769438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.769504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.769517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.081 [2024-11-04 14:00:31.769530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:45.081 [2024-11-04 14:00:31.769545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.770144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.770164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.081 [2024-11-04 14:00:31.770176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:26:45.081 [2024-11-04 14:00:31.770186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.770308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.770323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.081 [2024-11-04 14:00:31.770334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:26:45.081 [2024-11-04 14:00:31.770350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.790756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.790804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.081 [2024-11-04 14:00:31.790824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.384 ms 00:26:45.081 [2024-11-04 14:00:31.790835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.811684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:45.081 [2024-11-04 14:00:31.811741] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:45.081 [2024-11-04 14:00:31.811758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.811771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:45.081 [2024-11-04 14:00:31.811785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.788 ms 00:26:45.081 [2024-11-04 14:00:31.811796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.846749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.081 [2024-11-04 14:00:31.846823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:45.081 [2024-11-04 14:00:31.846851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.893 ms 00:26:45.081 [2024-11-04 14:00:31.846863] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.081 [2024-11-04 14:00:31.868991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.082 [2024-11-04 14:00:31.869067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:45.082 [2024-11-04 14:00:31.869084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.047 ms 00:26:45.082 [2024-11-04 14:00:31.869095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.082 [2024-11-04 14:00:31.889414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.082 [2024-11-04 14:00:31.889472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:45.082 [2024-11-04 14:00:31.889489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.256 ms 00:26:45.082 [2024-11-04 14:00:31.889500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.082 [2024-11-04 14:00:31.890493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.082 [2024-11-04 14:00:31.890530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:45.082 [2024-11-04 14:00:31.890544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:26:45.082 [2024-11-04 14:00:31.890556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.082 [2024-11-04 14:00:31.987252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.082 [2024-11-04 14:00:31.987325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:45.082 [2024-11-04 14:00:31.987344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.653 ms 00:26:45.082 [2024-11-04 14:00:31.987369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.082 [2024-11-04 14:00:32.000548] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:45.340 [2024-11-04 14:00:32.004257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.340 [2024-11-04 14:00:32.004299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:45.340 [2024-11-04 14:00:32.004318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.812 ms 00:26:45.340 [2024-11-04 14:00:32.004330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.340 [2024-11-04 14:00:32.004467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.340 [2024-11-04 14:00:32.004483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:45.340 [2024-11-04 14:00:32.004497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:45.340 [2024-11-04 14:00:32.004508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.340 [2024-11-04 14:00:32.004610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.340 [2024-11-04 14:00:32.004626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:45.340 [2024-11-04 14:00:32.004640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:45.340 [2024-11-04 14:00:32.004652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.340 [2024-11-04 14:00:32.004679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.340 [2024-11-04 14:00:32.004691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:45.340 [2024-11-04 14:00:32.004704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.340 [2024-11-04 14:00:32.004716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.340 [2024-11-04 14:00:32.004754] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:45.340 [2024-11-04 14:00:32.004768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.340 [2024-11-04 14:00:32.004783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:45.340 [2024-11-04 14:00:32.004795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:45.341 [2024-11-04 14:00:32.004807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.341 [2024-11-04 14:00:32.048761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.341 [2024-11-04 14:00:32.048840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:45.341 [2024-11-04 14:00:32.048861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.916 ms 00:26:45.341 [2024-11-04 14:00:32.048873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.341 [2024-11-04 14:00:32.048991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.341 [2024-11-04 14:00:32.049007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:45.341 [2024-11-04 14:00:32.049020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:45.341 [2024-11-04 14:00:32.049032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.341 [2024-11-04 14:00:32.050351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.447 ms, result 0 00:26:46.302  [2024-11-04T14:00:34.158Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-04T14:00:35.112Z] Copying: 62/1024 [MB] (31 MBps) [2024-11-04T14:00:36.485Z] Copying: 96/1024 [MB] (33 MBps) [2024-11-04T14:00:37.421Z] Copying: 131/1024 [MB] (35 MBps) [2024-11-04T14:00:38.356Z] Copying: 166/1024 [MB] (34 MBps) [2024-11-04T14:00:39.291Z] Copying: 199/1024 [MB] (32 MBps) [2024-11-04T14:00:40.226Z] Copying: 232/1024 [MB] (33 MBps) [2024-11-04T14:00:41.161Z] Copying: 267/1024 [MB] (34 MBps) [2024-11-04T14:00:42.097Z] Copying: 299/1024 [MB] (32 MBps) [2024-11-04T14:00:43.497Z] Copying: 332/1024 [MB] (32 MBps) [2024-11-04T14:00:44.064Z] Copying: 366/1024 [MB] (33 MBps) [2024-11-04T14:00:45.454Z] Copying: 398/1024 [MB] (32 MBps) [2024-11-04T14:00:46.390Z] Copying: 432/1024 [MB] (33 MBps) [2024-11-04T14:00:47.326Z] Copying: 465/1024 [MB] (33 MBps) [2024-11-04T14:00:48.274Z] Copying: 498/1024 [MB] (32 MBps) [2024-11-04T14:00:49.209Z] Copying: 531/1024 [MB] (33 MBps) [2024-11-04T14:00:50.164Z] Copying: 567/1024 [MB] (35 MBps) [2024-11-04T14:00:51.099Z] Copying: 602/1024 [MB] (35 MBps) [2024-11-04T14:00:52.474Z] Copying: 638/1024 [MB] (35 MBps) [2024-11-04T14:00:53.429Z] Copying: 674/1024 [MB] (36 MBps) [2024-11-04T14:00:54.364Z] Copying: 708/1024 [MB] (33 MBps) [2024-11-04T14:00:55.301Z] Copying: 742/1024 [MB] (34 MBps) [2024-11-04T14:00:56.236Z] Copying: 776/1024 [MB] (34 MBps) [2024-11-04T14:00:57.171Z] Copying: 811/1024 [MB] (35 MBps) [2024-11-04T14:00:58.106Z] Copying: 847/1024 [MB] (35 MBps) [2024-11-04T14:00:59.480Z] Copying: 882/1024 [MB] (34 MBps) [2024-11-04T14:01:00.417Z] Copying: 916/1024 [MB] (34 
MBps) [2024-11-04T14:01:01.351Z] Copying: 950/1024 [MB] (33 MBps) [2024-11-04T14:01:02.286Z] Copying: 981/1024 [MB] (31 MBps) [2024-11-04T14:01:02.545Z] Copying: 1014/1024 [MB] (33 MBps) [2024-11-04T14:01:02.545Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-04 14:01:02.348240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.348297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:15.623 [2024-11-04 14:01:02.348316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:15.623 [2024-11-04 14:01:02.348329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.348355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:15.623 [2024-11-04 14:01:02.352808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.352875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:15.623 [2024-11-04 14:01:02.352890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.432 ms 00:27:15.623 [2024-11-04 14:01:02.352902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.354520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.354712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:15.623 [2024-11-04 14:01:02.354739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:27:15.623 [2024-11-04 14:01:02.354751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.370052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.370109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:15.623 [2024-11-04 14:01:02.370123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.272 ms 00:27:15.623 [2024-11-04 14:01:02.370152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.375699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.375747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:15.623 [2024-11-04 14:01:02.375760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.508 ms 00:27:15.623 [2024-11-04 14:01:02.375771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.416605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.416663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:15.623 [2024-11-04 14:01:02.416681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.771 ms 00:27:15.623 [2024-11-04 14:01:02.416693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.438792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.438856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:15.623 [2024-11-04 14:01:02.438876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.045 ms 00:27:15.623 [2024-11-04 14:01:02.438889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
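The Copying progress above completes the write half of the round trip this excerpt documents: restore.sh fills a 1 GiB test file from /dev/urandom, checksums it, writes it into the ftl0 bdev with spdk_dd, and (further down, after the next FTL shutdown/startup cycle) reads the same blocks back. A minimal sketch of that flow, with the paths, flags, and counts copied from the traced commands; the closing checksum comparison is assumed from the md5sum step rather than shown verbatim in this excerpt:

testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# 256K records of 4 KiB = 1 GiB of random data (262144+0 records, per the dd output above).
dd if=/dev/urandom of="$testfile" bs=4K count=256K
md5_before=$(md5sum "$testfile")    # checksum taken before the device round trip

# Write the file into the FTL bdev, then read the same range back into the same path.
"$spdk_dd" --if="$testfile" --ob=ftl0 --json="$ftl_json"
"$spdk_dd" --ib=ftl0 --of="$testfile" --json="$ftl_json" --count=262144

# Assumed verification step: the restored data must match the original checksum.
md5_after=$(md5sum "$testfile")
[ "${md5_before%% *}" = "${md5_after%% *}" ] || echo 'FTL restore: data mismatch'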
00:27:15.623 [2024-11-04 14:01:02.439071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.439091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:15.623 [2024-11-04 14:01:02.439114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:27:15.623 [2024-11-04 14:01:02.439125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.484454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.484780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:15.623 [2024-11-04 14:01:02.484812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.302 ms 00:27:15.623 [2024-11-04 14:01:02.484837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-04 14:01:02.529832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-04 14:01:02.529900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:15.623 [2024-11-04 14:01:02.529952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.883 ms 00:27:15.623 [2024-11-04 14:01:02.529976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-04 14:01:02.572534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.883 [2024-11-04 14:01:02.572798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:15.883 [2024-11-04 14:01:02.572963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.488 ms 00:27:15.883 [2024-11-04 14:01:02.573010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-04 14:01:02.613139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.883 [2024-11-04 14:01:02.613360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:15.883 [2024-11-04 14:01:02.613471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.956 ms 00:27:15.883 [2024-11-04 14:01:02.613488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-04 14:01:02.613539] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:15.883 [2024-11-04 14:01:02.613558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 
14:01:02.613680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:15.883 [2024-11-04 14:01:02.613868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:27:15.884 [2024-11-04 14:01:02.613975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.613998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.614999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-04 14:01:02.615179] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:15.884 [2024-11-04 14:01:02.615197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4d732ee-44ac-43ed-b057-e9820a577d87 00:27:15.884 [2024-11-04 14:01:02.615209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:15.884 [2024-11-04 14:01:02.615224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:15.884 [2024-11-04 14:01:02.615234] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:15.884 [2024-11-04 14:01:02.615245] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:15.884 [2024-11-04 14:01:02.615256] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:15.884 [2024-11-04 14:01:02.615267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:15.884 [2024-11-04 14:01:02.615278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:15.884 [2024-11-04 
14:01:02.615301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:15.884 [2024-11-04 14:01:02.615311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:15.884 [2024-11-04 14:01:02.615323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.884 [2024-11-04 14:01:02.615335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:15.884 [2024-11-04 14:01:02.615348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 00:27:15.884 [2024-11-04 14:01:02.615359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.884 [2024-11-04 14:01:02.636774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.885 [2024-11-04 14:01:02.636830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:15.885 [2024-11-04 14:01:02.636850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.333 ms 00:27:15.885 [2024-11-04 14:01:02.636864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-04 14:01:02.637388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.885 [2024-11-04 14:01:02.637410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:15.885 [2024-11-04 14:01:02.637422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:27:15.885 [2024-11-04 14:01:02.637433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-04 14:01:02.691595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-04 14:01:02.691682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.885 [2024-11-04 14:01:02.691707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-04 14:01:02.691725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-04 14:01:02.691838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-04 14:01:02.691860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.885 [2024-11-04 14:01:02.691878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-04 14:01:02.691894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-04 14:01:02.692017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-04 14:01:02.692040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.885 [2024-11-04 14:01:02.692058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-04 14:01:02.692076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-04 14:01:02.692107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-04 14:01:02.692119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.885 [2024-11-04 14:01:02.692130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-04 14:01:02.692141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.824004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.824080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:27:16.143 [2024-11-04 14:01:02.824102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.824114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.940788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.940878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.143 [2024-11-04 14:01:02.940899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.940915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.941077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.143 [2024-11-04 14:01:02.941097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.941109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.941187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.143 [2024-11-04 14:01:02.941199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.941215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.941380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.143 [2024-11-04 14:01:02.941392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.941403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.941458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:16.143 [2024-11-04 14:01:02.941469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.941481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.941552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.143 [2024-11-04 14:01:02.941603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.941616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.143 [2024-11-04 14:01:02.941682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.143 [2024-11-04 14:01:02.941693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.143 [2024-11-04 14:01:02.941704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.143 [2024-11-04 14:01:02.941854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 593.563 ms, result 0 00:27:18.044 00:27:18.044 00:27:18.044 14:01:04 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:18.044 [2024-11-04 14:01:04.682339] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:27:18.044 [2024-11-04 14:01:04.682527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77950 ] 00:27:18.044 [2024-11-04 14:01:04.886818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.302 [2024-11-04 14:01:05.058925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.560 [2024-11-04 14:01:05.456362] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.560 [2024-11-04 14:01:05.456679] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.820 [2024-11-04 14:01:05.623083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.623152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:18.820 [2024-11-04 14:01:05.623179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.820 [2024-11-04 14:01:05.623192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.623254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.623269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.820 [2024-11-04 14:01:05.623285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:18.820 [2024-11-04 14:01:05.623297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.623322] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:18.820 [2024-11-04 14:01:05.624523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:18.820 [2024-11-04 14:01:05.624557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.624581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.820 [2024-11-04 14:01:05.624595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:27:18.820 [2024-11-04 14:01:05.624607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.626223] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:18.820 [2024-11-04 14:01:05.649248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.649311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:18.820 [2024-11-04 14:01:05.649330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.024 ms 00:27:18.820 [2024-11-04 14:01:05.649343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.649441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:18.820 [2024-11-04 14:01:05.649456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:18.820 [2024-11-04 14:01:05.649470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:18.820 [2024-11-04 14:01:05.649482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.657221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.657277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.820 [2024-11-04 14:01:05.657295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.614 ms 00:27:18.820 [2024-11-04 14:01:05.657310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.657421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.657442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.820 [2024-11-04 14:01:05.657458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:18.820 [2024-11-04 14:01:05.657474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.657537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.657554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:18.820 [2024-11-04 14:01:05.657590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:18.820 [2024-11-04 14:01:05.657606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.657643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:18.820 [2024-11-04 14:01:05.663209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.663247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.820 [2024-11-04 14:01:05.663261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.575 ms 00:27:18.820 [2024-11-04 14:01:05.663276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.663317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.663329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:18.820 [2024-11-04 14:01:05.663341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:18.820 [2024-11-04 14:01:05.663353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.663412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:18.820 [2024-11-04 14:01:05.663437] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:18.820 [2024-11-04 14:01:05.663477] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:18.820 [2024-11-04 14:01:05.663500] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:18.820 [2024-11-04 14:01:05.663619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:18.820 [2024-11-04 14:01:05.663652] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:18.820 [2024-11-04 14:01:05.663667] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:18.820 [2024-11-04 14:01:05.663682] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:18.820 [2024-11-04 14:01:05.663696] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:18.820 [2024-11-04 14:01:05.663709] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:18.820 [2024-11-04 14:01:05.663721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:18.820 [2024-11-04 14:01:05.663732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:18.820 [2024-11-04 14:01:05.663743] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:18.820 [2024-11-04 14:01:05.663759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.663771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:18.820 [2024-11-04 14:01:05.663783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:27:18.820 [2024-11-04 14:01:05.663794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.663883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.820 [2024-11-04 14:01:05.663901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:18.820 [2024-11-04 14:01:05.663914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:18.820 [2024-11-04 14:01:05.663925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.820 [2024-11-04 14:01:05.664037] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:18.820 [2024-11-04 14:01:05.664058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:18.820 [2024-11-04 14:01:05.664070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.820 [2024-11-04 14:01:05.664082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.820 [2024-11-04 14:01:05.664094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:18.820 [2024-11-04 14:01:05.664105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:18.820 [2024-11-04 14:01:05.664115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:18.820 [2024-11-04 14:01:05.664127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:18.820 [2024-11-04 14:01:05.664138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:18.820 [2024-11-04 14:01:05.664149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.820 [2024-11-04 14:01:05.664160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:18.820 [2024-11-04 14:01:05.664171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:18.820 [2024-11-04 14:01:05.664182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.820 [2024-11-04 14:01:05.664193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:18.820 [2024-11-04 14:01:05.664204] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:27:18.820 [2024-11-04 14:01:05.664225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.820 [2024-11-04 14:01:05.664236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:18.820 [2024-11-04 14:01:05.664247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:18.820 [2024-11-04 14:01:05.664257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.820 [2024-11-04 14:01:05.664269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:18.820 [2024-11-04 14:01:05.664280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:18.820 [2024-11-04 14:01:05.664290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.820 [2024-11-04 14:01:05.664301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:18.820 [2024-11-04 14:01:05.664312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.821 [2024-11-04 14:01:05.664333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:18.821 [2024-11-04 14:01:05.664344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.821 [2024-11-04 14:01:05.664365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:18.821 [2024-11-04 14:01:05.664375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.821 [2024-11-04 14:01:05.664397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:18.821 [2024-11-04 14:01:05.664408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.821 [2024-11-04 14:01:05.664430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:18.821 [2024-11-04 14:01:05.664441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:18.821 [2024-11-04 14:01:05.664452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.821 [2024-11-04 14:01:05.664462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:18.821 [2024-11-04 14:01:05.664473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:18.821 [2024-11-04 14:01:05.664484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:18.821 [2024-11-04 14:01:05.664506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:18.821 [2024-11-04 14:01:05.664517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664528] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:18.821 [2024-11-04 14:01:05.664539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:18.821 [2024-11-04 14:01:05.664550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.821 [2024-11-04 
14:01:05.664562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.821 [2024-11-04 14:01:05.664574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:18.821 [2024-11-04 14:01:05.664601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:18.821 [2024-11-04 14:01:05.664612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:18.821 [2024-11-04 14:01:05.664623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:18.821 [2024-11-04 14:01:05.664634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:18.821 [2024-11-04 14:01:05.664650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:18.821 [2024-11-04 14:01:05.664663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:18.821 [2024-11-04 14:01:05.664677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:18.821 [2024-11-04 14:01:05.664703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:18.821 [2024-11-04 14:01:05.664715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:18.821 [2024-11-04 14:01:05.664727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:18.821 [2024-11-04 14:01:05.664739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:18.821 [2024-11-04 14:01:05.664751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:18.821 [2024-11-04 14:01:05.664763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:18.821 [2024-11-04 14:01:05.664775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:18.821 [2024-11-04 14:01:05.664786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:18.821 [2024-11-04 14:01:05.664798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:18.821 [2024-11-04 
14:01:05.664886] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:18.821 [2024-11-04 14:01:05.664904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.821 [2024-11-04 14:01:05.664936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:18.821 [2024-11-04 14:01:05.664949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:18.821 [2024-11-04 14:01:05.664962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:18.821 [2024-11-04 14:01:05.664975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.821 [2024-11-04 14:01:05.664987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:18.821 [2024-11-04 14:01:05.664999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:27:18.821 [2024-11-04 14:01:05.665011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.821 [2024-11-04 14:01:05.711584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.821 [2024-11-04 14:01:05.711662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.821 [2024-11-04 14:01:05.711681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.511 ms 00:27:18.821 [2024-11-04 14:01:05.711693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.821 [2024-11-04 14:01:05.711816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.821 [2024-11-04 14:01:05.711829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:18.821 [2024-11-04 14:01:05.711842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:18.821 [2024-11-04 14:01:05.711853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.079 [2024-11-04 14:01:05.778541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.079 [2024-11-04 14:01:05.778620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.079 [2024-11-04 14:01:05.778639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.589 ms 00:27:19.079 [2024-11-04 14:01:05.778651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.079 [2024-11-04 14:01:05.778734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.079 [2024-11-04 14:01:05.778748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.079 [2024-11-04 14:01:05.778761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:19.079 [2024-11-04 14:01:05.778795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.079 [2024-11-04 14:01:05.779380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.079 [2024-11-04 14:01:05.779403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.080 [2024-11-04 14:01:05.779416] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:27:19.080 [2024-11-04 14:01:05.779428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.779564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.779580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.080 [2024-11-04 14:01:05.779593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:19.080 [2024-11-04 14:01:05.779611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.801069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.801136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.080 [2024-11-04 14:01:05.801159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.416 ms 00:27:19.080 [2024-11-04 14:01:05.801171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.823723] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:19.080 [2024-11-04 14:01:05.824073] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:19.080 [2024-11-04 14:01:05.824103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.824116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:19.080 [2024-11-04 14:01:05.824131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.753 ms 00:27:19.080 [2024-11-04 14:01:05.824143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.862025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.862131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:19.080 [2024-11-04 14:01:05.862151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.806 ms 00:27:19.080 [2024-11-04 14:01:05.862164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.885498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.885584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:19.080 [2024-11-04 14:01:05.885604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.240 ms 00:27:19.080 [2024-11-04 14:01:05.885616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.909125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.909323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:19.080 [2024-11-04 14:01:05.909352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.431 ms 00:27:19.080 [2024-11-04 14:01:05.909364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.080 [2024-11-04 14:01:05.910362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.080 [2024-11-04 14:01:05.910390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:19.080 [2024-11-04 14:01:05.910406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.836 ms 00:27:19.080 [2024-11-04 14:01:05.910423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.013808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.013875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:19.338 [2024-11-04 14:01:06.013903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.351 ms 00:27:19.338 [2024-11-04 14:01:06.013927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.028707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:19.338 [2024-11-04 14:01:06.032395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.032545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:19.338 [2024-11-04 14:01:06.032703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.394 ms 00:27:19.338 [2024-11-04 14:01:06.032749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.032945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.033142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:19.338 [2024-11-04 14:01:06.033225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:19.338 [2024-11-04 14:01:06.033245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.033351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.033366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:19.338 [2024-11-04 14:01:06.033380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:19.338 [2024-11-04 14:01:06.033392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.033419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.033432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:19.338 [2024-11-04 14:01:06.033444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:19.338 [2024-11-04 14:01:06.033456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.033496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:19.338 [2024-11-04 14:01:06.033514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.033526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:19.338 [2024-11-04 14:01:06.033537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:19.338 [2024-11-04 14:01:06.033549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.078495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.078716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:19.338 [2024-11-04 14:01:06.078817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.908 ms 00:27:19.338 [2024-11-04 14:01:06.078868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:19.338 [2024-11-04 14:01:06.078992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.338 [2024-11-04 14:01:06.079094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:19.338 [2024-11-04 14:01:06.079141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:19.338 [2024-11-04 14:01:06.079176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.338 [2024-11-04 14:01:06.080486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 456.905 ms, result 0 00:27:20.712  [2024-11-04T14:01:08.569Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-04T14:01:09.506Z] Copying: 65/1024 [MB] (32 MBps) [2024-11-04T14:01:10.476Z] Copying: 99/1024 [MB] (34 MBps) [2024-11-04T14:01:11.472Z] Copying: 133/1024 [MB] (34 MBps) [2024-11-04T14:01:12.408Z] Copying: 168/1024 [MB] (34 MBps) [2024-11-04T14:01:13.344Z] Copying: 202/1024 [MB] (33 MBps) [2024-11-04T14:01:14.720Z] Copying: 230/1024 [MB] (28 MBps) [2024-11-04T14:01:15.656Z] Copying: 265/1024 [MB] (34 MBps) [2024-11-04T14:01:16.593Z] Copying: 298/1024 [MB] (33 MBps) [2024-11-04T14:01:17.527Z] Copying: 330/1024 [MB] (31 MBps) [2024-11-04T14:01:18.462Z] Copying: 364/1024 [MB] (34 MBps) [2024-11-04T14:01:19.398Z] Copying: 397/1024 [MB] (32 MBps) [2024-11-04T14:01:20.773Z] Copying: 430/1024 [MB] (33 MBps) [2024-11-04T14:01:21.706Z] Copying: 461/1024 [MB] (30 MBps) [2024-11-04T14:01:22.474Z] Copying: 493/1024 [MB] (31 MBps) [2024-11-04T14:01:23.483Z] Copying: 526/1024 [MB] (32 MBps) [2024-11-04T14:01:24.417Z] Copying: 560/1024 [MB] (34 MBps) [2024-11-04T14:01:25.352Z] Copying: 593/1024 [MB] (32 MBps) [2024-11-04T14:01:26.728Z] Copying: 625/1024 [MB] (32 MBps) [2024-11-04T14:01:27.662Z] Copying: 659/1024 [MB] (33 MBps) [2024-11-04T14:01:28.609Z] Copying: 690/1024 [MB] (31 MBps) [2024-11-04T14:01:29.550Z] Copying: 722/1024 [MB] (31 MBps) [2024-11-04T14:01:30.483Z] Copying: 753/1024 [MB] (31 MBps) [2024-11-04T14:01:31.416Z] Copying: 784/1024 [MB] (30 MBps) [2024-11-04T14:01:32.349Z] Copying: 814/1024 [MB] (30 MBps) [2024-11-04T14:01:33.733Z] Copying: 847/1024 [MB] (32 MBps) [2024-11-04T14:01:34.666Z] Copying: 880/1024 [MB] (33 MBps) [2024-11-04T14:01:35.599Z] Copying: 911/1024 [MB] (31 MBps) [2024-11-04T14:01:36.532Z] Copying: 945/1024 [MB] (34 MBps) [2024-11-04T14:01:37.465Z] Copying: 977/1024 [MB] (31 MBps) [2024-11-04T14:01:38.030Z] Copying: 1010/1024 [MB] (32 MBps) [2024-11-04T14:01:38.030Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-04 14:01:37.857622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.857717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:51.108 [2024-11-04 14:01:37.857748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:51.108 [2024-11-04 14:01:37.857768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.857812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:51.108 [2024-11-04 14:01:37.862456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.862531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:51.108 [2024-11-04 14:01:37.862579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:27:51.108 [2024-11-04 14:01:37.862600] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.862933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.862976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:51.108 [2024-11-04 14:01:37.862997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:51.108 [2024-11-04 14:01:37.863014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.866937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.866999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:51.108 [2024-11-04 14:01:37.867022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.891 ms 00:27:51.108 [2024-11-04 14:01:37.867040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.876530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.876617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:51.108 [2024-11-04 14:01:37.876641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.431 ms 00:27:51.108 [2024-11-04 14:01:37.876662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.919765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.919851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:51.108 [2024-11-04 14:01:37.919876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.957 ms 00:27:51.108 [2024-11-04 14:01:37.919893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.946444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.946512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:51.108 [2024-11-04 14:01:37.946531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.457 ms 00:27:51.108 [2024-11-04 14:01:37.946544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.946872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.946909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:51.108 [2024-11-04 14:01:37.946923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:27:51.108 [2024-11-04 14:01:37.946935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.108 [2024-11-04 14:01:37.993918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.108 [2024-11-04 14:01:37.994191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:51.108 [2024-11-04 14:01:37.994220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.959 ms 00:27:51.108 [2024-11-04 14:01:37.994233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.367 [2024-11-04 14:01:38.038774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.367 [2024-11-04 14:01:38.038863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:51.367 [2024-11-04 14:01:38.038881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 44.471 ms 00:27:51.367 [2024-11-04 14:01:38.038911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.367 [2024-11-04 14:01:38.082639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.367 [2024-11-04 14:01:38.082711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:51.367 [2024-11-04 14:01:38.082728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.653 ms 00:27:51.367 [2024-11-04 14:01:38.082739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.367 [2024-11-04 14:01:38.125159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.367 [2024-11-04 14:01:38.125229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:51.367 [2024-11-04 14:01:38.125247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.242 ms 00:27:51.367 [2024-11-04 14:01:38.125258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.367 [2024-11-04 14:01:38.125330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:51.367 [2024-11-04 14:01:38.125351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:27:51.367 [2024-11-04 14:01:38.125590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:51.367 [2024-11-04 14:01:38.125949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.125961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.125973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.125985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.125997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126539] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:51.368 [2024-11-04 14:01:38.126681] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:51.368 [2024-11-04 14:01:38.126698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4d732ee-44ac-43ed-b057-e9820a577d87 00:27:51.368 [2024-11-04 14:01:38.126710] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:51.368 [2024-11-04 14:01:38.126721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:51.368 [2024-11-04 14:01:38.126733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:51.368 [2024-11-04 14:01:38.126745] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:51.368 [2024-11-04 14:01:38.126756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:51.368 [2024-11-04 14:01:38.126768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:51.368 [2024-11-04 14:01:38.126792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:51.368 [2024-11-04 14:01:38.126803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:51.368 [2024-11-04 14:01:38.126813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:51.368 [2024-11-04 14:01:38.126825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.368 [2024-11-04 14:01:38.126837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:51.368 [2024-11-04 14:01:38.126850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:27:51.368 [2024-11-04 14:01:38.126861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.368 [2024-11-04 14:01:38.151345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.368 [2024-11-04 14:01:38.151412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:51.368 [2024-11-04 14:01:38.151430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.393 ms 00:27:51.368 [2024-11-04 14:01:38.151442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.368 [2024-11-04 14:01:38.152150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.368 [2024-11-04 14:01:38.152169] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:51.368 [2024-11-04 14:01:38.152182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:27:51.368 [2024-11-04 14:01:38.152204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.368 [2024-11-04 14:01:38.214880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.368 [2024-11-04 14:01:38.214950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:51.368 [2024-11-04 14:01:38.214968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.368 [2024-11-04 14:01:38.214980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.368 [2024-11-04 14:01:38.215062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.368 [2024-11-04 14:01:38.215074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:51.368 [2024-11-04 14:01:38.215086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.368 [2024-11-04 14:01:38.215103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.368 [2024-11-04 14:01:38.215227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.368 [2024-11-04 14:01:38.215243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:51.368 [2024-11-04 14:01:38.215254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.368 [2024-11-04 14:01:38.215265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.368 [2024-11-04 14:01:38.215284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.368 [2024-11-04 14:01:38.215296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:51.368 [2024-11-04 14:01:38.215307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.368 [2024-11-04 14:01:38.215317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.362158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.362234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:51.627 [2024-11-04 14:01:38.362252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.362265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.481854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.481934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.627 [2024-11-04 14:01:38.481951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.481964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.482091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.482105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.627 [2024-11-04 14:01:38.482117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.482128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.482183] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.482196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.627 [2024-11-04 14:01:38.482208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.482220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.482342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.482357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.627 [2024-11-04 14:01:38.482369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.482380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.482419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.482435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:51.627 [2024-11-04 14:01:38.482446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.482457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.482495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.482512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.627 [2024-11-04 14:01:38.482523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.482535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.627 [2024-11-04 14:01:38.482610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.627 [2024-11-04 14:01:38.482641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.627 [2024-11-04 14:01:38.482654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.627 [2024-11-04 14:01:38.482665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.628 [2024-11-04 14:01:38.482799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 625.149 ms, result 0 00:27:53.001 00:27:53.001 00:27:53.001 14:01:39 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:54.901 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:54.901 14:01:41 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:54.901 [2024-11-04 14:01:41.747601] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
00:27:54.901 [2024-11-04 14:01:41.747757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78325 ] 00:27:55.158 [2024-11-04 14:01:41.935155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.417 [2024-11-04 14:01:42.113933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.675 [2024-11-04 14:01:42.513641] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.675 [2024-11-04 14:01:42.513723] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.934 [2024-11-04 14:01:42.678464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.678526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:55.934 [2024-11-04 14:01:42.678550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:55.934 [2024-11-04 14:01:42.678563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.678649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.678664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:55.934 [2024-11-04 14:01:42.678680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:55.934 [2024-11-04 14:01:42.678691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.678715] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:55.934 [2024-11-04 14:01:42.679957] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:55.934 [2024-11-04 14:01:42.680157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.680177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:55.934 [2024-11-04 14:01:42.680191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:27:55.934 [2024-11-04 14:01:42.680202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.681892] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:55.934 [2024-11-04 14:01:42.704614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.704684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:55.934 [2024-11-04 14:01:42.704701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.717 ms 00:27:55.934 [2024-11-04 14:01:42.704713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.704862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.704878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:55.934 [2024-11-04 14:01:42.704891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:55.934 [2024-11-04 14:01:42.704902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.712867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:55.934 [2024-11-04 14:01:42.712923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:55.934 [2024-11-04 14:01:42.712938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.846 ms 00:27:55.934 [2024-11-04 14:01:42.712950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.713058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.713074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:55.934 [2024-11-04 14:01:42.713086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:55.934 [2024-11-04 14:01:42.713098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.713156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.713169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:55.934 [2024-11-04 14:01:42.713180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:55.934 [2024-11-04 14:01:42.713191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.713221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:55.934 [2024-11-04 14:01:42.718788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.718841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:55.934 [2024-11-04 14:01:42.718856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.574 ms 00:27:55.934 [2024-11-04 14:01:42.718872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.718931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.718944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:55.934 [2024-11-04 14:01:42.718956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:55.934 [2024-11-04 14:01:42.718968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.934 [2024-11-04 14:01:42.719049] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:55.934 [2024-11-04 14:01:42.719076] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:55.934 [2024-11-04 14:01:42.719116] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:55.934 [2024-11-04 14:01:42.719138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:55.934 [2024-11-04 14:01:42.719240] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:55.934 [2024-11-04 14:01:42.719255] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:55.934 [2024-11-04 14:01:42.719270] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:55.934 [2024-11-04 14:01:42.719284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:55.934 [2024-11-04 14:01:42.719297] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:55.934 [2024-11-04 14:01:42.719310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:55.934 [2024-11-04 14:01:42.719321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:55.934 [2024-11-04 14:01:42.719332] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:55.934 [2024-11-04 14:01:42.719342] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:55.934 [2024-11-04 14:01:42.719358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.934 [2024-11-04 14:01:42.719369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:55.935 [2024-11-04 14:01:42.719380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:27:55.935 [2024-11-04 14:01:42.719391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.935 [2024-11-04 14:01:42.719476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.935 [2024-11-04 14:01:42.719488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:55.935 [2024-11-04 14:01:42.719500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:55.935 [2024-11-04 14:01:42.719511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.935 [2024-11-04 14:01:42.719643] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:55.935 [2024-11-04 14:01:42.719666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:55.935 [2024-11-04 14:01:42.719678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:55.935 [2024-11-04 14:01:42.719689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:55.935 [2024-11-04 14:01:42.719712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:55.935 [2024-11-04 14:01:42.719733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:55.935 [2024-11-04 14:01:42.719743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:55.935 [2024-11-04 14:01:42.719763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:55.935 [2024-11-04 14:01:42.719773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:55.935 [2024-11-04 14:01:42.719783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:55.935 [2024-11-04 14:01:42.719793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:55.935 [2024-11-04 14:01:42.719804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:55.935 [2024-11-04 14:01:42.719824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:55.935 [2024-11-04 14:01:42.719844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:55.935 [2024-11-04 14:01:42.719854] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:55.935 [2024-11-04 14:01:42.719889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.935 [2024-11-04 14:01:42.719913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:55.935 [2024-11-04 14:01:42.719924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.935 [2024-11-04 14:01:42.719944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:55.935 [2024-11-04 14:01:42.719954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.935 [2024-11-04 14:01:42.719974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:55.935 [2024-11-04 14:01:42.719984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:55.935 [2024-11-04 14:01:42.719994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.935 [2024-11-04 14:01:42.720005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:55.935 [2024-11-04 14:01:42.720015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:55.935 [2024-11-04 14:01:42.720025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:55.935 [2024-11-04 14:01:42.720035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:55.935 [2024-11-04 14:01:42.720045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:55.935 [2024-11-04 14:01:42.720056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:55.935 [2024-11-04 14:01:42.720066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:55.935 [2024-11-04 14:01:42.720076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:55.935 [2024-11-04 14:01:42.720086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.720096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:55.935 [2024-11-04 14:01:42.720106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:55.935 [2024-11-04 14:01:42.720116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.720125] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:55.935 [2024-11-04 14:01:42.720136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:55.935 [2024-11-04 14:01:42.720147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:55.935 [2024-11-04 14:01:42.720157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.935 [2024-11-04 14:01:42.720169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:55.935 [2024-11-04 14:01:42.720179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:55.935 [2024-11-04 14:01:42.720189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:55.935 
[2024-11-04 14:01:42.720199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:55.935 [2024-11-04 14:01:42.720209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:55.935 [2024-11-04 14:01:42.720222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:55.935 [2024-11-04 14:01:42.720233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:55.935 [2024-11-04 14:01:42.720247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:55.935 [2024-11-04 14:01:42.720272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:55.935 [2024-11-04 14:01:42.720283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:55.935 [2024-11-04 14:01:42.720294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:55.935 [2024-11-04 14:01:42.720306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:55.935 [2024-11-04 14:01:42.720317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:55.935 [2024-11-04 14:01:42.720328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:55.935 [2024-11-04 14:01:42.720339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:55.935 [2024-11-04 14:01:42.720351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:55.935 [2024-11-04 14:01:42.720362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:55.935 [2024-11-04 14:01:42.720419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:55.935 [2024-11-04 14:01:42.720436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:55.935 [2024-11-04 14:01:42.720460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:55.935 [2024-11-04 14:01:42.720472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:55.935 [2024-11-04 14:01:42.720483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:55.935 [2024-11-04 14:01:42.720495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.935 [2024-11-04 14:01:42.720506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:55.935 [2024-11-04 14:01:42.720518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:27:55.935 [2024-11-04 14:01:42.720529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.935 [2024-11-04 14:01:42.766534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.935 [2024-11-04 14:01:42.766819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.935 [2024-11-04 14:01:42.766849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.935 ms 00:27:55.935 [2024-11-04 14:01:42.766862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.935 [2024-11-04 14:01:42.766982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.935 [2024-11-04 14:01:42.766995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:55.935 [2024-11-04 14:01:42.767008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:55.935 [2024-11-04 14:01:42.767019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.935 [2024-11-04 14:01:42.837101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.935 [2024-11-04 14:01:42.837356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.935 [2024-11-04 14:01:42.837386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.990 ms 00:27:55.935 [2024-11-04 14:01:42.837398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.935 [2024-11-04 14:01:42.837470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.935 [2024-11-04 14:01:42.837485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.936 [2024-11-04 14:01:42.837498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:55.936 [2024-11-04 14:01:42.837517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.936 [2024-11-04 14:01:42.838107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.936 [2024-11-04 14:01:42.838129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.936 [2024-11-04 14:01:42.838142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:27:55.936 [2024-11-04 14:01:42.838154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.936 [2024-11-04 14:01:42.838297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.936 [2024-11-04 14:01:42.838313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.936 [2024-11-04 14:01:42.838325] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:27:55.936 [2024-11-04 14:01:42.838344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.194 [2024-11-04 14:01:42.860763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.194 [2024-11-04 14:01:42.860836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.194 [2024-11-04 14:01:42.860877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.391 ms 00:27:56.194 [2024-11-04 14:01:42.860890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.194 [2024-11-04 14:01:42.884443] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:56.194 [2024-11-04 14:01:42.884513] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:56.194 [2024-11-04 14:01:42.884533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.194 [2024-11-04 14:01:42.884547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:56.194 [2024-11-04 14:01:42.884563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.481 ms 00:27:56.194 [2024-11-04 14:01:42.884590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.194 [2024-11-04 14:01:42.921720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.194 [2024-11-04 14:01:42.922076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:56.194 [2024-11-04 14:01:42.922106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.038 ms 00:27:56.194 [2024-11-04 14:01:42.922119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.194 [2024-11-04 14:01:42.946198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.194 [2024-11-04 14:01:42.946296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:56.194 [2024-11-04 14:01:42.946315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.987 ms 00:27:56.194 [2024-11-04 14:01:42.946328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:42.968521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:42.968611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:56.195 [2024-11-04 14:01:42.968630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.116 ms 00:27:56.195 [2024-11-04 14:01:42.968641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:42.969711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:42.969749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:56.195 [2024-11-04 14:01:42.969764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:27:56.195 [2024-11-04 14:01:42.969782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:43.075350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:43.075434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:56.195 [2024-11-04 14:01:43.075465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 105.529 ms 00:27:56.195 [2024-11-04 14:01:43.075478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:43.091660] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:56.195 [2024-11-04 14:01:43.095407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:43.095460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:56.195 [2024-11-04 14:01:43.095477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.842 ms 00:27:56.195 [2024-11-04 14:01:43.095491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:43.095670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:43.095687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:56.195 [2024-11-04 14:01:43.095701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:56.195 [2024-11-04 14:01:43.095717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:43.095808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:43.095823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:56.195 [2024-11-04 14:01:43.095835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:56.195 [2024-11-04 14:01:43.095847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:43.095875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:43.095888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:56.195 [2024-11-04 14:01:43.095900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:56.195 [2024-11-04 14:01:43.095912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.195 [2024-11-04 14:01:43.095948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:56.195 [2024-11-04 14:01:43.095965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.195 [2024-11-04 14:01:43.095977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:56.195 [2024-11-04 14:01:43.095989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:56.195 [2024-11-04 14:01:43.096000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.525 [2024-11-04 14:01:43.141800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.525 [2024-11-04 14:01:43.141882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:56.525 [2024-11-04 14:01:43.141902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.769 ms 00:27:56.525 [2024-11-04 14:01:43.141928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.525 [2024-11-04 14:01:43.142050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.525 [2024-11-04 14:01:43.142066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:56.525 [2024-11-04 14:01:43.142079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:56.525 [2024-11-04 14:01:43.142092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:56.525 [2024-11-04 14:01:43.143698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 464.666 ms, result 0 00:27:57.460  [2024-11-04T14:01:45.318Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-04T14:01:46.257Z] Copying: 65/1024 [MB] (31 MBps) [2024-11-04T14:01:47.190Z] Copying: 101/1024 [MB] (35 MBps) [2024-11-04T14:01:48.565Z] Copying: 135/1024 [MB] (34 MBps) [2024-11-04T14:01:49.175Z] Copying: 168/1024 [MB] (33 MBps) [2024-11-04T14:01:50.548Z] Copying: 202/1024 [MB] (33 MBps) [2024-11-04T14:01:51.483Z] Copying: 235/1024 [MB] (33 MBps) [2024-11-04T14:01:52.418Z] Copying: 270/1024 [MB] (34 MBps) [2024-11-04T14:01:53.352Z] Copying: 304/1024 [MB] (34 MBps) [2024-11-04T14:01:54.287Z] Copying: 338/1024 [MB] (34 MBps) [2024-11-04T14:01:55.331Z] Copying: 372/1024 [MB] (33 MBps) [2024-11-04T14:01:56.269Z] Copying: 406/1024 [MB] (34 MBps) [2024-11-04T14:01:57.206Z] Copying: 441/1024 [MB] (34 MBps) [2024-11-04T14:01:58.583Z] Copying: 476/1024 [MB] (35 MBps) [2024-11-04T14:01:59.520Z] Copying: 510/1024 [MB] (34 MBps) [2024-11-04T14:02:00.455Z] Copying: 544/1024 [MB] (33 MBps) [2024-11-04T14:02:01.391Z] Copying: 577/1024 [MB] (32 MBps) [2024-11-04T14:02:02.327Z] Copying: 610/1024 [MB] (33 MBps) [2024-11-04T14:02:03.263Z] Copying: 644/1024 [MB] (33 MBps) [2024-11-04T14:02:04.199Z] Copying: 678/1024 [MB] (34 MBps) [2024-11-04T14:02:05.607Z] Copying: 711/1024 [MB] (32 MBps) [2024-11-04T14:02:06.173Z] Copying: 744/1024 [MB] (32 MBps) [2024-11-04T14:02:07.550Z] Copying: 776/1024 [MB] (32 MBps) [2024-11-04T14:02:08.486Z] Copying: 807/1024 [MB] (31 MBps) [2024-11-04T14:02:09.423Z] Copying: 839/1024 [MB] (31 MBps) [2024-11-04T14:02:10.360Z] Copying: 870/1024 [MB] (31 MBps) [2024-11-04T14:02:11.296Z] Copying: 902/1024 [MB] (31 MBps) [2024-11-04T14:02:12.238Z] Copying: 934/1024 [MB] (31 MBps) [2024-11-04T14:02:13.174Z] Copying: 966/1024 [MB] (32 MBps) [2024-11-04T14:02:14.552Z] Copying: 998/1024 [MB] (32 MBps) [2024-11-04T14:02:15.119Z] Copying: 1023/1024 [MB] (24 MBps) [2024-11-04T14:02:15.119Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-04 14:02:14.844719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:14.844800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:28.197 [2024-11-04 14:02:14.844828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.197 [2024-11-04 14:02:14.844851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:14.846951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:28.197 [2024-11-04 14:02:14.853281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:14.853452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:28.197 [2024-11-04 14:02:14.853477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:28:28.197 [2024-11-04 14:02:14.853491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:14.875524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:14.875641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:28.197 [2024-11-04 14:02:14.875668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.183 ms 00:28:28.197 [2024-11-04 14:02:14.875685] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:14.901433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:14.901509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:28.197 [2024-11-04 14:02:14.901534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.707 ms 00:28:28.197 [2024-11-04 14:02:14.901551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:14.909958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:14.910014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:28.197 [2024-11-04 14:02:14.910034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.339 ms 00:28:28.197 [2024-11-04 14:02:14.910049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:14.969591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:14.969672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:28.197 [2024-11-04 14:02:14.969695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.439 ms 00:28:28.197 [2024-11-04 14:02:14.969713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:15.002546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:15.002899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:28.197 [2024-11-04 14:02:15.002936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.761 ms 00:28:28.197 [2024-11-04 14:02:15.002954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.197 [2024-11-04 14:02:15.111652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.197 [2024-11-04 14:02:15.111766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:28.197 [2024-11-04 14:02:15.111792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.614 ms 00:28:28.197 [2024-11-04 14:02:15.111810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.456 [2024-11-04 14:02:15.173587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.456 [2024-11-04 14:02:15.173900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:28.456 [2024-11-04 14:02:15.173934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.747 ms 00:28:28.456 [2024-11-04 14:02:15.173951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.456 [2024-11-04 14:02:15.230921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.456 [2024-11-04 14:02:15.230989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:28.456 [2024-11-04 14:02:15.231005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.902 ms 00:28:28.456 [2024-11-04 14:02:15.231016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.456 [2024-11-04 14:02:15.268327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.456 [2024-11-04 14:02:15.268383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:28.456 [2024-11-04 14:02:15.268399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 37.264 ms 00:28:28.456 [2024-11-04 14:02:15.268410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.456 [2024-11-04 14:02:15.311040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.456 [2024-11-04 14:02:15.311099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:28.456 [2024-11-04 14:02:15.311117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.532 ms 00:28:28.456 [2024-11-04 14:02:15.311128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.456 [2024-11-04 14:02:15.311188] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:28.456 [2024-11-04 14:02:15.311208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116480 / 261120 wr_cnt: 1 state: open 00:28:28.456 [2024-11-04 14:02:15.311224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 
state: free 00:28:28.456 [2024-11-04 14:02:15.311451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 
0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.311991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:28.456 [2024-11-04 14:02:15.312073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312362] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:28.457 [2024-11-04 14:02:15.312438] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:28.457 [2024-11-04 14:02:15.312448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4d732ee-44ac-43ed-b057-e9820a577d87 00:28:28.457 [2024-11-04 14:02:15.312460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116480 00:28:28.457 [2024-11-04 14:02:15.312471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117440 00:28:28.457 [2024-11-04 14:02:15.312483] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116480 00:28:28.457 [2024-11-04 14:02:15.312494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:28:28.457 [2024-11-04 14:02:15.312504] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:28.457 [2024-11-04 14:02:15.312523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:28.457 [2024-11-04 14:02:15.312545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:28.457 [2024-11-04 14:02:15.312555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:28.457 [2024-11-04 14:02:15.312582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:28.457 [2024-11-04 14:02:15.312594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.457 [2024-11-04 14:02:15.312606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:28.457 [2024-11-04 14:02:15.312617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:28:28.457 [2024-11-04 14:02:15.312628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.457 [2024-11-04 14:02:15.334908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.457 [2024-11-04 14:02:15.334957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.457 [2024-11-04 14:02:15.334972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.200 ms 00:28:28.457 [2024-11-04 14:02:15.334990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.457 [2024-11-04 14:02:15.335539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.457 [2024-11-04 14:02:15.335556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.457 [2024-11-04 14:02:15.335577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:28:28.457 [2024-11-04 14:02:15.335588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.388998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:28.715 [2024-11-04 14:02:15.389060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.715 [2024-11-04 14:02:15.389083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.715 [2024-11-04 14:02:15.389093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.389173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.715 [2024-11-04 14:02:15.389185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.715 [2024-11-04 14:02:15.389196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.715 [2024-11-04 14:02:15.389206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.389295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.715 [2024-11-04 14:02:15.389309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.715 [2024-11-04 14:02:15.389320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.715 [2024-11-04 14:02:15.389335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.389353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.715 [2024-11-04 14:02:15.389363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.715 [2024-11-04 14:02:15.389375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.715 [2024-11-04 14:02:15.389385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.519841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.715 [2024-11-04 14:02:15.520107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.715 [2024-11-04 14:02:15.520142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.715 [2024-11-04 14:02:15.520153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.629794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.715 [2024-11-04 14:02:15.630063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.715 [2024-11-04 14:02:15.630088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.715 [2024-11-04 14:02:15.630099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.715 [2024-11-04 14:02:15.630206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.716 [2024-11-04 14:02:15.630218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.716 [2024-11-04 14:02:15.630230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.716 [2024-11-04 14:02:15.630240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.716 [2024-11-04 14:02:15.630293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.716 [2024-11-04 14:02:15.630305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.716 [2024-11-04 14:02:15.630330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.716 [2024-11-04 14:02:15.630341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.716 [2024-11-04 
14:02:15.630460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.716 [2024-11-04 14:02:15.630474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.716 [2024-11-04 14:02:15.630485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.716 [2024-11-04 14:02:15.630496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.716 [2024-11-04 14:02:15.630539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.716 [2024-11-04 14:02:15.630552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.716 [2024-11-04 14:02:15.630563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.716 [2024-11-04 14:02:15.630604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.716 [2024-11-04 14:02:15.630642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.716 [2024-11-04 14:02:15.630655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.716 [2024-11-04 14:02:15.630665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.716 [2024-11-04 14:02:15.630676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.716 [2024-11-04 14:02:15.630727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.716 [2024-11-04 14:02:15.630740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.716 [2024-11-04 14:02:15.630752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.716 [2024-11-04 14:02:15.630762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.716 [2024-11-04 14:02:15.630886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 788.818 ms, result 0 00:28:30.110 00:28:30.110 00:28:30.110 14:02:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:30.368 [2024-11-04 14:02:17.083761] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
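For reference, the read-back step started above has this shape when run by hand. spdk_dd mirrors dd(1), except that --ib names an SPDK bdev as input (ftl0, defined by the --json config) rather than a file, while --skip and --count keep dd's block-granular meaning. Paths are the ones this job uses:

  # The restore read-back from ftl/restore.sh@80, reflowed for readability:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --skip=131072 --count=262144

The md5sum -c step near the end of this test (restore.sh@82) then verifies the file written here.
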
00:28:30.368 [2024-11-04 14:02:17.083891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78672 ] 00:28:30.368 [2024-11-04 14:02:17.256645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.626 [2024-11-04 14:02:17.383074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.884 [2024-11-04 14:02:17.765909] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.884 [2024-11-04 14:02:17.765984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.144 [2024-11-04 14:02:17.927838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.928137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:31.144 [2024-11-04 14:02:17.928176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:31.144 [2024-11-04 14:02:17.928191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.928266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.928282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:31.144 [2024-11-04 14:02:17.928299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:31.144 [2024-11-04 14:02:17.928312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.928340] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:31.144 [2024-11-04 14:02:17.929591] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:31.144 [2024-11-04 14:02:17.929626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.929640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:31.144 [2024-11-04 14:02:17.929653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:28:31.144 [2024-11-04 14:02:17.929665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.931190] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:31.144 [2024-11-04 14:02:17.952357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.952399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:31.144 [2024-11-04 14:02:17.952414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.167 ms 00:28:31.144 [2024-11-04 14:02:17.952425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.952498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.952511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:31.144 [2024-11-04 14:02:17.952523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:31.144 [2024-11-04 14:02:17.952534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.959351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
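The startup sequence above binds a base bdev to nvc0n1p0 as the NV (write-buffer) cache. Outside these test scripts the same pairing is normally set up over JSON-RPC; a sketch, assuming the usual rpc.py flag spellings (-b name, -d base bdev, -c cache bdev); confirm against rpc.py bdev_ftl_create -h on your tree:

  # Create an FTL bdev shaped like ftl0 above. Names follow this run
  # (ftl0, nvc0n1p0); "basen1" is a hypothetical base data bdev.
  ./scripts/rpc.py bdev_ftl_create -b ftl0 -d basen1 -c nvc0n1p0
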
00:28:31.144 [2024-11-04 14:02:17.959386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:31.144 [2024-11-04 14:02:17.959400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.718 ms 00:28:31.144 [2024-11-04 14:02:17.959411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.959497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.959511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:31.144 [2024-11-04 14:02:17.959523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:31.144 [2024-11-04 14:02:17.959534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.959595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.959609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:31.144 [2024-11-04 14:02:17.959619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:31.144 [2024-11-04 14:02:17.959629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.959657] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:31.144 [2024-11-04 14:02:17.964553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.964590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:31.144 [2024-11-04 14:02:17.964603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:28:31.144 [2024-11-04 14:02:17.964617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.964648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.964659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:31.144 [2024-11-04 14:02:17.964670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:31.144 [2024-11-04 14:02:17.964680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.964737] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:31.144 [2024-11-04 14:02:17.964761] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:31.144 [2024-11-04 14:02:17.964798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:31.144 [2024-11-04 14:02:17.964827] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:31.144 [2024-11-04 14:02:17.964918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:31.144 [2024-11-04 14:02:17.964941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:31.144 [2024-11-04 14:02:17.964955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:31.144 [2024-11-04 14:02:17.964968] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:31.144 [2024-11-04 14:02:17.964981] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:31.144 [2024-11-04 14:02:17.964992] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:31.144 [2024-11-04 14:02:17.965002] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:31.144 [2024-11-04 14:02:17.965012] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:31.144 [2024-11-04 14:02:17.965022] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:31.144 [2024-11-04 14:02:17.965037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.965048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:31.144 [2024-11-04 14:02:17.965059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:31.144 [2024-11-04 14:02:17.965069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.965143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.144 [2024-11-04 14:02:17.965154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:31.144 [2024-11-04 14:02:17.965166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:31.144 [2024-11-04 14:02:17.965176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.144 [2024-11-04 14:02:17.965273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:31.144 [2024-11-04 14:02:17.965291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:31.144 [2024-11-04 14:02:17.965302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.144 [2024-11-04 14:02:17.965313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.144 [2024-11-04 14:02:17.965323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:31.144 [2024-11-04 14:02:17.965333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:31.144 [2024-11-04 14:02:17.965342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:31.144 [2024-11-04 14:02:17.965353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:31.144 [2024-11-04 14:02:17.965363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:31.144 [2024-11-04 14:02:17.965389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.144 [2024-11-04 14:02:17.965399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:31.144 [2024-11-04 14:02:17.965409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:31.144 [2024-11-04 14:02:17.965419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.144 [2024-11-04 14:02:17.965430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:31.144 [2024-11-04 14:02:17.965440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:31.144 [2024-11-04 14:02:17.965461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.144 [2024-11-04 14:02:17.965471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:31.145 [2024-11-04 14:02:17.965481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965491] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:31.145 [2024-11-04 14:02:17.965512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:31.145 [2024-11-04 14:02:17.965545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:31.145 [2024-11-04 14:02:17.965576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:31.145 [2024-11-04 14:02:17.965628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:31.145 [2024-11-04 14:02:17.965659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.145 [2024-11-04 14:02:17.965680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:31.145 [2024-11-04 14:02:17.965699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:31.145 [2024-11-04 14:02:17.965709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.145 [2024-11-04 14:02:17.965720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:31.145 [2024-11-04 14:02:17.965730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:31.145 [2024-11-04 14:02:17.965740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:31.145 [2024-11-04 14:02:17.965761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:31.145 [2024-11-04 14:02:17.965771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:31.145 [2024-11-04 14:02:17.965792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:31.145 [2024-11-04 14:02:17.965804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.145 [2024-11-04 14:02:17.965826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:31.145 [2024-11-04 14:02:17.965836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:31.145 [2024-11-04 14:02:17.965846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:31.145 
[2024-11-04 14:02:17.965856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:31.145 [2024-11-04 14:02:17.965866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:31.145 [2024-11-04 14:02:17.965876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:31.145 [2024-11-04 14:02:17.965888] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:31.145 [2024-11-04 14:02:17.965901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.965914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:31.145 [2024-11-04 14:02:17.965926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:31.145 [2024-11-04 14:02:17.965937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:31.145 [2024-11-04 14:02:17.965949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:31.145 [2024-11-04 14:02:17.965961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:31.145 [2024-11-04 14:02:17.965972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:31.145 [2024-11-04 14:02:17.965983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:31.145 [2024-11-04 14:02:17.965994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:31.145 [2024-11-04 14:02:17.966006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:31.145 [2024-11-04 14:02:17.966017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.966029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.966041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.966052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.966064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:31.145 [2024-11-04 14:02:17.966075] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:31.145 [2024-11-04 14:02:17.966091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.966103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:31.145 [2024-11-04 14:02:17.966115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:31.145 [2024-11-04 14:02:17.966126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:31.145 [2024-11-04 14:02:17.966138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:31.145 [2024-11-04 14:02:17.966150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.145 [2024-11-04 14:02:17.966162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:31.145 [2024-11-04 14:02:17.966173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:28:31.145 [2024-11-04 14:02:17.966184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.145 [2024-11-04 14:02:18.004099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.145 [2024-11-04 14:02:18.004152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:31.145 [2024-11-04 14:02:18.004168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.864 ms 00:28:31.145 [2024-11-04 14:02:18.004179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.145 [2024-11-04 14:02:18.004276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.145 [2024-11-04 14:02:18.004287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:31.145 [2024-11-04 14:02:18.004298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:31.145 [2024-11-04 14:02:18.004308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.069522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.069595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.404 [2024-11-04 14:02:18.069613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.134 ms 00:28:31.404 [2024-11-04 14:02:18.069625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.069689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.069702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.404 [2024-11-04 14:02:18.069715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:31.404 [2024-11-04 14:02:18.069731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.070244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.070261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.404 [2024-11-04 14:02:18.070274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:28:31.404 [2024-11-04 14:02:18.070286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.070411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.070432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.404 [2024-11-04 14:02:18.070443] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:28:31.404 [2024-11-04 14:02:18.070460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.091023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.091209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.404 [2024-11-04 14:02:18.091329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.537 ms 00:28:31.404 [2024-11-04 14:02:18.091370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.112088] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:31.404 [2024-11-04 14:02:18.112321] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:31.404 [2024-11-04 14:02:18.112470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.112563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:31.404 [2024-11-04 14:02:18.112617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.869 ms 00:28:31.404 [2024-11-04 14:02:18.112647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.144833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.145031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:31.404 [2024-11-04 14:02:18.145116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.864 ms 00:28:31.404 [2024-11-04 14:02:18.145155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.404 [2024-11-04 14:02:18.164975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.404 [2024-11-04 14:02:18.165035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:31.405 [2024-11-04 14:02:18.165051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.661 ms 00:28:31.405 [2024-11-04 14:02:18.165063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.185062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.185111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:31.405 [2024-11-04 14:02:18.185128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.952 ms 00:28:31.405 [2024-11-04 14:02:18.185140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.186195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.186239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:31.405 [2024-11-04 14:02:18.186254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:28:31.405 [2024-11-04 14:02:18.186281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.280448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.280520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:31.405 [2024-11-04 14:02:18.280544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.135 ms 00:28:31.405 [2024-11-04 14:02:18.280555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.292939] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:31.405 [2024-11-04 14:02:18.296291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.296357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:31.405 [2024-11-04 14:02:18.296374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.652 ms 00:28:31.405 [2024-11-04 14:02:18.296386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.296509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.296523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:31.405 [2024-11-04 14:02:18.296535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:31.405 [2024-11-04 14:02:18.296550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.298302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.298491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:31.405 [2024-11-04 14:02:18.298515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:28:31.405 [2024-11-04 14:02:18.298527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.298591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.298605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.405 [2024-11-04 14:02:18.298617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:31.405 [2024-11-04 14:02:18.298629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.405 [2024-11-04 14:02:18.298669] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:31.405 [2024-11-04 14:02:18.298686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.405 [2024-11-04 14:02:18.298698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:31.405 [2024-11-04 14:02:18.298709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:31.405 [2024-11-04 14:02:18.298721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.664 [2024-11-04 14:02:18.337623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.664 [2024-11-04 14:02:18.337680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:31.664 [2024-11-04 14:02:18.337698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.876 ms 00:28:31.664 [2024-11-04 14:02:18.337716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.664 [2024-11-04 14:02:18.337806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.664 [2024-11-04 14:02:18.337820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:31.664 [2024-11-04 14:02:18.337832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:31.664 [2024-11-04 14:02:18.337843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
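Every management phase above is emitted by trace_step as a name/duration/status group, so the slow phases (Restore P2L checkpoints at 94.135 ms, Initialize NV cache at 65.134 ms) can be ranked mechanically. A sketch, again against a hypothetical one-record-per-line capture named build.log:

  # Rank trace_step phases by duration: remember each "name:" record and
  # attach it to the "duration:" record that follows, then sort descending.
  awk -F'name: |duration: ' \
      '/trace_step.*name:/ {n = $2}
       /trace_step.*duration:/ {split($2, d, " "); print d[1], "ms -", n}' build.log \
    | sort -rn | head
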
00:28:31.664 [2024-11-04 14:02:18.339532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.734 ms, result 0 00:28:33.071  [2024-11-04T14:02:20.931Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-04T14:02:21.868Z] Copying: 58/1024 [MB] (32 MBps) [2024-11-04T14:02:22.802Z] Copying: 91/1024 [MB] (32 MBps) [2024-11-04T14:02:23.748Z] Copying: 122/1024 [MB] (31 MBps) [2024-11-04T14:02:24.683Z] Copying: 151/1024 [MB] (28 MBps) [2024-11-04T14:02:25.619Z] Copying: 182/1024 [MB] (31 MBps) [2024-11-04T14:02:26.996Z] Copying: 213/1024 [MB] (31 MBps) [2024-11-04T14:02:27.930Z] Copying: 243/1024 [MB] (29 MBps) [2024-11-04T14:02:28.867Z] Copying: 274/1024 [MB] (31 MBps) [2024-11-04T14:02:29.805Z] Copying: 305/1024 [MB] (30 MBps) [2024-11-04T14:02:30.742Z] Copying: 335/1024 [MB] (30 MBps) [2024-11-04T14:02:31.679Z] Copying: 367/1024 [MB] (31 MBps) [2024-11-04T14:02:32.614Z] Copying: 397/1024 [MB] (30 MBps) [2024-11-04T14:02:33.990Z] Copying: 425/1024 [MB] (28 MBps) [2024-11-04T14:02:34.926Z] Copying: 454/1024 [MB] (28 MBps) [2024-11-04T14:02:35.863Z] Copying: 483/1024 [MB] (29 MBps) [2024-11-04T14:02:36.835Z] Copying: 514/1024 [MB] (31 MBps) [2024-11-04T14:02:37.770Z] Copying: 546/1024 [MB] (31 MBps) [2024-11-04T14:02:38.707Z] Copying: 577/1024 [MB] (31 MBps) [2024-11-04T14:02:39.642Z] Copying: 609/1024 [MB] (31 MBps) [2024-11-04T14:02:41.015Z] Copying: 641/1024 [MB] (32 MBps) [2024-11-04T14:02:41.951Z] Copying: 674/1024 [MB] (33 MBps) [2024-11-04T14:02:42.886Z] Copying: 705/1024 [MB] (30 MBps) [2024-11-04T14:02:43.819Z] Copying: 734/1024 [MB] (29 MBps) [2024-11-04T14:02:44.753Z] Copying: 767/1024 [MB] (32 MBps) [2024-11-04T14:02:45.689Z] Copying: 799/1024 [MB] (32 MBps) [2024-11-04T14:02:46.624Z] Copying: 832/1024 [MB] (32 MBps) [2024-11-04T14:02:47.997Z] Copying: 865/1024 [MB] (32 MBps) [2024-11-04T14:02:48.972Z] Copying: 895/1024 [MB] (30 MBps) [2024-11-04T14:02:49.912Z] Copying: 927/1024 [MB] (31 MBps) [2024-11-04T14:02:50.848Z] Copying: 961/1024 [MB] (33 MBps) [2024-11-04T14:02:51.781Z] Copying: 994/1024 [MB] (33 MBps) [2024-11-04T14:02:51.781Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-04 14:02:51.729793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.859 [2024-11-04 14:02:51.729870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:04.859 [2024-11-04 14:02:51.729896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:04.859 [2024-11-04 14:02:51.729915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.859 [2024-11-04 14:02:51.729960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:04.859 [2024-11-04 14:02:51.737302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.859 [2024-11-04 14:02:51.737362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:04.859 [2024-11-04 14:02:51.737382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.313 ms 00:29:04.859 [2024-11-04 14:02:51.737400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.859 [2024-11-04 14:02:51.737738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.859 [2024-11-04 14:02:51.737761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:04.859 [2024-11-04 14:02:51.737778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 
ms 00:29:04.859 [2024-11-04 14:02:51.737794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.859 [2024-11-04 14:02:51.744523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.859 [2024-11-04 14:02:51.744600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:04.859 [2024-11-04 14:02:51.744622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.685 ms 00:29:04.859 [2024-11-04 14:02:51.744641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.859 [2024-11-04 14:02:51.754307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.859 [2024-11-04 14:02:51.754362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:04.859 [2024-11-04 14:02:51.754383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.573 ms 00:29:04.859 [2024-11-04 14:02:51.754400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.117 [2024-11-04 14:02:51.814088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.117 [2024-11-04 14:02:51.814161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:05.117 [2024-11-04 14:02:51.814185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.581 ms 00:29:05.117 [2024-11-04 14:02:51.814202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.117 [2024-11-04 14:02:51.846448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.117 [2024-11-04 14:02:51.846526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:05.117 [2024-11-04 14:02:51.846550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.171 ms 00:29:05.117 [2024-11-04 14:02:51.846584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.117 [2024-11-04 14:02:51.962686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.117 [2024-11-04 14:02:51.962760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:05.117 [2024-11-04 14:02:51.962780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.028 ms 00:29:05.117 [2024-11-04 14:02:51.962794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.117 [2024-11-04 14:02:52.001979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.117 [2024-11-04 14:02:52.002058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:05.117 [2024-11-04 14:02:52.002076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.163 ms 00:29:05.117 [2024-11-04 14:02:52.002086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.392 [2024-11-04 14:02:52.040163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.392 [2024-11-04 14:02:52.040219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:05.392 [2024-11-04 14:02:52.040249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.021 ms 00:29:05.392 [2024-11-04 14:02:52.040260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.392 [2024-11-04 14:02:52.077753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.392 [2024-11-04 14:02:52.078000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:05.392 [2024-11-04 14:02:52.078027] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.447 ms 00:29:05.392 [2024-11-04 14:02:52.078038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.392 [2024-11-04 14:02:52.115984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.392 [2024-11-04 14:02:52.116041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:05.392 [2024-11-04 14:02:52.116057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.850 ms 00:29:05.392 [2024-11-04 14:02:52.116067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.392 [2024-11-04 14:02:52.116111] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:05.392 [2024-11-04 14:02:52.116130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:05.392 [2024-11-04 14:02:52.116144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:05.392 [2024-11-04 14:02:52.116156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:05.392 [2024-11-04 14:02:52.116167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.392 [2024-11-04 14:02:52.116178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.392 [2024-11-04 14:02:52.116189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.392 [2024-11-04 14:02:52.116200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.392 [2024-11-04 14:02:52.116211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116658] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 
14:02:52.116948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.116990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:29:05.393 [2024-11-04 14:02:52.117241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:05.393 [2024-11-04 14:02:52.117272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:05.394 [2024-11-04 14:02:52.117283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:05.394 [2024-11-04 14:02:52.117294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:05.394 [2024-11-04 14:02:52.117305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:05.394 [2024-11-04 14:02:52.117325] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:05.394 [2024-11-04 14:02:52.117335] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4d732ee-44ac-43ed-b057-e9820a577d87 00:29:05.394 [2024-11-04 14:02:52.117347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:05.394 [2024-11-04 14:02:52.117357] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15552 00:29:05.394 [2024-11-04 14:02:52.117368] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14592 00:29:05.394 [2024-11-04 14:02:52.117379] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0658 00:29:05.394 [2024-11-04 14:02:52.117390] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:05.394 [2024-11-04 14:02:52.117407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:05.394 [2024-11-04 14:02:52.117418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:05.394 [2024-11-04 14:02:52.117439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:05.394 [2024-11-04 14:02:52.117449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:05.394 [2024-11-04 14:02:52.117461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.394 [2024-11-04 14:02:52.117476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:05.394 [2024-11-04 14:02:52.117493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:29:05.394 [2024-11-04 14:02:52.117506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.394 [2024-11-04 14:02:52.139098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.394 [2024-11-04 14:02:52.139145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:05.394 [2024-11-04 14:02:52.139160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.532 ms 00:29:05.394 [2024-11-04 14:02:52.139193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.394 [2024-11-04 14:02:52.139780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.394 [2024-11-04 14:02:52.139799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:05.394 [2024-11-04 14:02:52.139812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:29:05.394 [2024-11-04 14:02:52.139823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.394 [2024-11-04 14:02:52.196428] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.394 [2024-11-04 14:02:52.196700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:05.394 [2024-11-04 14:02:52.196734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.394 [2024-11-04 14:02:52.196745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.394 [2024-11-04 14:02:52.196842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.394 [2024-11-04 14:02:52.196856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:05.394 [2024-11-04 14:02:52.196869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.394 [2024-11-04 14:02:52.196881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.394 [2024-11-04 14:02:52.196994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.394 [2024-11-04 14:02:52.197010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:05.394 [2024-11-04 14:02:52.197023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.394 [2024-11-04 14:02:52.197040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.394 [2024-11-04 14:02:52.197060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.394 [2024-11-04 14:02:52.197072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:05.394 [2024-11-04 14:02:52.197084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.394 [2024-11-04 14:02:52.197096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.652 [2024-11-04 14:02:52.337759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.652 [2024-11-04 14:02:52.337817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:05.652 [2024-11-04 14:02:52.337843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.652 [2024-11-04 14:02:52.337855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.652 [2024-11-04 14:02:52.450168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.652 [2024-11-04 14:02:52.450433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:05.652 [2024-11-04 14:02:52.450462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.652 [2024-11-04 14:02:52.450475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.652 [2024-11-04 14:02:52.450601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.652 [2024-11-04 14:02:52.450618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:05.653 [2024-11-04 14:02:52.450632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.653 [2024-11-04 14:02:52.450644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.653 [2024-11-04 14:02:52.450708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.653 [2024-11-04 14:02:52.450721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:05.653 [2024-11-04 14:02:52.450734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.653 [2024-11-04 14:02:52.450745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.653 [2024-11-04 14:02:52.450879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.653 [2024-11-04 14:02:52.450893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:05.653 [2024-11-04 14:02:52.450905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.653 [2024-11-04 14:02:52.450916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.653 [2024-11-04 14:02:52.450958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.653 [2024-11-04 14:02:52.450972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:05.653 [2024-11-04 14:02:52.450984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.653 [2024-11-04 14:02:52.450995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.653 [2024-11-04 14:02:52.451033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.653 [2024-11-04 14:02:52.451045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:05.653 [2024-11-04 14:02:52.451056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.653 [2024-11-04 14:02:52.451068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.653 [2024-11-04 14:02:52.451116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:05.653 [2024-11-04 14:02:52.451129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:05.653 [2024-11-04 14:02:52.451141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:05.653 [2024-11-04 14:02:52.451152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:05.653 [2024-11-04 14:02:52.451285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 721.454 ms, result 0
00:29:07.027
00:29:07.027
00:29:07.027 14:02:53 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:08.952 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77325
00:29:08.952 14:02:55 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 77325 ']'
00:29:08.952 14:02:55 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 77325
00:29:08.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77325) - No such process
00:29:08.952 Process with pid 77325 is not found
00:29:08.952 Remove shared memory files
00:29:08.952 14:02:55 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 77325 is not found'
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
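The `md5sum -c` step above is how restore.sh proves data survived the FTL shutdown and restart: a checksum recorded before the restart is re-verified against the file afterwards, and a non-zero exit fails the test. A minimal sketch of the same pattern (the paths are illustrative, not the test's actual helpers):

    # record a reference checksum before restarting the device under test
    md5sum /path/to/testfile > /path/to/testfile.md5
    # ... shutdown and restart of the FTL device happens here ...
    # re-verify; md5sum -c exits non-zero if the data no longer matches
    md5sum -c /path/to/testfile.md5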
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:08.952 14:02:55 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:29:08.952 ************************************
00:29:08.952 END TEST ftl_restore
00:29:08.952 ************************************
00:29:08.952
00:29:08.952 real 2m50.573s
00:29:08.952 user 2m37.345s
00:29:08.952 sys 0m16.614s
00:29:08.952 14:02:55 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:08.952 14:02:55 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:29:08.952 14:02:55 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:29:08.952 14:02:55 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:29:08.952 14:02:55 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:08.952 14:02:55 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:08.952 ************************************
00:29:08.952 START TEST ftl_dirty_shutdown
00:29:08.952 ************************************
00:29:08.952 14:02:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:29:09.212 * Looking for test storage...
00:29:09.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:29:09.212 14:02:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:09.212 14:02:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:09.212 14:02:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.212 --rc genhtml_branch_coverage=1
00:29:09.212 --rc genhtml_function_coverage=1
00:29:09.212 --rc genhtml_legend=1
00:29:09.212 --rc geninfo_all_blocks=1
00:29:09.212 --rc geninfo_unexecuted_blocks=1
00:29:09.212
00:29:09.212 '
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.212 --rc genhtml_branch_coverage=1
00:29:09.212 --rc genhtml_function_coverage=1
00:29:09.212 --rc genhtml_legend=1
00:29:09.212 --rc geninfo_all_blocks=1
00:29:09.212 --rc geninfo_unexecuted_blocks=1
00:29:09.212
00:29:09.212 '
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:29:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.212 --rc genhtml_branch_coverage=1
00:29:09.212 --rc genhtml_function_coverage=1
00:29:09.212 --rc genhtml_legend=1
00:29:09.212 --rc geninfo_all_blocks=1
00:29:09.212 --rc geninfo_unexecuted_blocks=1
00:29:09.212
00:29:09.212 '
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:29:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.212 --rc genhtml_branch_coverage=1
00:29:09.212 --rc genhtml_function_coverage=1
00:29:09.212 --rc genhtml_legend=1
00:29:09.212 --rc geninfo_all_blocks=1
00:29:09.212 --rc geninfo_unexecuted_blocks=1
00:29:09.212
00:29:09.212 '
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
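The `cmp_versions` trace above is scripts/common.sh splitting dotted version strings on `.` and `-` and comparing them field by field; here it decides that the installed lcov 1.15 is below 2, so the 1.x option spelling (`--rc lcov_branch_coverage=1`) is used. A rough standalone sketch of the same idea, not the script itself:

    # compare two dotted versions field by field; exits 0 if $1 < $2
    version_lt() {
      local IFS=.-
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1   # equal counts as not-less-than
    }
    version_lt 1.15 2 && echo "use the older lcov option syntax"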
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79128
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79128
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79128 ']'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:09.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:09.212 14:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:09.471 [2024-11-04 14:02:56.211992] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization...
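`waitforlisten 79128` above blocks until the freshly launched `spdk_tgt` answers on its RPC socket before any bdev RPCs are issued. The same start-and-wait pattern, sketched independently of the autotest_common.sh helper (the socket path and retry budget here are illustrative):

    # start the SPDK target pinned to core 0, then poll its RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    for i in $(seq 1 100); do
      # rpc.py exits non-zero until the app is up and listening
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
    done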
00:29:09.471 [2024-11-04 14:02:56.212412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79128 ]
00:29:09.730 [2024-11-04 14:02:56.415485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:09.730 [2024-11-04 14:02:56.586366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:29:10.666 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb
00:29:11.233 14:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:29:11.492 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[
00:29:11.492 {
00:29:11.492 "name": "nvme0n1",
00:29:11.492 "aliases": [
00:29:11.492 "ec19869c-1f13-4f9a-aec2-236dfd43be2e"
00:29:11.492 ],
00:29:11.492 "product_name": "NVMe disk",
00:29:11.492 "block_size": 4096,
00:29:11.492 "num_blocks": 1310720,
00:29:11.492 "uuid": "ec19869c-1f13-4f9a-aec2-236dfd43be2e",
00:29:11.492 "numa_id": -1,
00:29:11.492 "assigned_rate_limits": {
00:29:11.492 "rw_ios_per_sec": 0,
00:29:11.492 "rw_mbytes_per_sec": 0,
00:29:11.492 "r_mbytes_per_sec": 0,
00:29:11.492 "w_mbytes_per_sec": 0
00:29:11.492 },
00:29:11.492 "claimed": true,
00:29:11.492 "claim_type": "read_many_write_one",
00:29:11.492 "zoned": false,
00:29:11.492 "supported_io_types": {
00:29:11.492 "read": true,
00:29:11.492 "write": true,
00:29:11.492 "unmap": true,
00:29:11.492 "flush": true,
00:29:11.492 "reset": true,
00:29:11.492 "nvme_admin": true,
00:29:11.492 "nvme_io": true,
00:29:11.492 "nvme_io_md": false,
00:29:11.492 "write_zeroes": true,
00:29:11.492 "zcopy": false,
00:29:11.492 "get_zone_info": false,
00:29:11.492 "zone_management": false,
00:29:11.492 "zone_append": false,
00:29:11.492 "compare": true,
00:29:11.492 "compare_and_write": false,
00:29:11.492 "abort": true,
00:29:11.492 "seek_hole": false,
00:29:11.492 "seek_data": false,
00:29:11.492 "copy": true,
00:29:11.492 "nvme_iov_md": false
00:29:11.492 },
00:29:11.492 "driver_specific": {
00:29:11.492 "nvme": [
00:29:11.492 {
00:29:11.492 "pci_address": "0000:00:11.0",
00:29:11.492 "trid": {
00:29:11.492 "trtype": "PCIe",
00:29:11.492 "traddr": "0000:00:11.0"
00:29:11.492 },
00:29:11.492 "ctrlr_data": {
00:29:11.492 "cntlid": 0,
00:29:11.492 "vendor_id": "0x1b36",
00:29:11.492 "model_number": "QEMU NVMe Ctrl",
00:29:11.492 "serial_number": "12341",
00:29:11.492 "firmware_revision": "8.0.0",
00:29:11.492 "subnqn": "nqn.2019-08.org.qemu:12341",
00:29:11.492 "oacs": {
00:29:11.492 "security": 0,
00:29:11.492 "format": 1,
00:29:11.492 "firmware": 0,
00:29:11.492 "ns_manage": 1
00:29:11.492 },
00:29:11.492 "multi_ctrlr": false,
00:29:11.492 "ana_reporting": false
00:29:11.492 },
00:29:11.493 "vs": {
00:29:11.493 "nvme_version": "1.4"
00:29:11.493 },
00:29:11.493 "ns_data": {
00:29:11.493 "id": 1,
00:29:11.493 "can_share": false
00:29:11.493 }
00:29:11.493 }
00:29:11.493 ],
00:29:11.493 "mp_policy": "active_passive"
00:29:11.493 }
00:29:11.493 }
00:29:11.493 ]'
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:11.493 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:11.752 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=cf189bba-e971-4311-bb67-bb5a9f0b2cc0
00:29:11.752 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:29:11.752 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf189bba-e971-4311-bb67-bb5a9f0b2cc0
00:29:12.010 14:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:29:12.272 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f723b8c5-6f54-4ce0-8d00-c89912bcfc5b
00:29:12.272 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f723b8c5-6f54-4ce0-8d00-c89912bcfc5b
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']'
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size=
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb
00:29:12.531 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:12.792 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[
00:29:12.792 {
00:29:12.792 "name": "54050ba0-9c48-4d8a-84fa-354459954a1c",
00:29:12.792 "aliases": [
00:29:12.792 "lvs/nvme0n1p0"
00:29:12.792 ],
00:29:12.792 "product_name": "Logical Volume",
00:29:12.792 "block_size": 4096,
00:29:12.792 "num_blocks": 26476544,
00:29:12.792 "uuid": "54050ba0-9c48-4d8a-84fa-354459954a1c",
00:29:12.792 "assigned_rate_limits": {
00:29:12.792 "rw_ios_per_sec": 0,
00:29:12.792 "rw_mbytes_per_sec": 0,
00:29:12.792 "r_mbytes_per_sec": 0,
00:29:12.792 "w_mbytes_per_sec": 0
00:29:12.792 },
00:29:12.792 "claimed": false,
00:29:12.792 "zoned": false,
00:29:12.792 "supported_io_types": {
00:29:12.792 "read": true,
00:29:12.792 "write": true,
00:29:12.792 "unmap": true,
00:29:12.792 "flush": false,
00:29:12.792 "reset": true,
00:29:12.792 "nvme_admin": false,
00:29:12.792 "nvme_io": false,
00:29:12.792 "nvme_io_md": false,
00:29:12.792 "write_zeroes": true,
00:29:12.792 "zcopy": false,
00:29:12.792 "get_zone_info": false,
00:29:12.792 "zone_management": false,
00:29:12.792 "zone_append": false,
00:29:12.792 "compare": false,
00:29:12.792 "compare_and_write": false,
00:29:12.792 "abort": false,
00:29:12.792 "seek_hole": true,
00:29:12.792 "seek_data": true,
00:29:12.792 "copy": false,
00:29:12.792 "nvme_iov_md": false
00:29:12.792 },
00:29:12.792 "driver_specific": {
00:29:12.792 "lvol": {
00:29:12.793 "lvol_store_uuid": "f723b8c5-6f54-4ce0-8d00-c89912bcfc5b",
00:29:12.793 "base_bdev": "nvme0n1",
00:29:12.793 "thin_provision": true,
00:29:12.793 "num_allocated_clusters": 0,
00:29:12.793 "snapshot": false,
00:29:12.793 "clone": false,
00:29:12.793 "esnap_clone": false
00:29:12.793 }
00:29:12.793 }
00:29:12.793 }
00:29:12.793 ]'
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:29:12.793 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb
00:29:13.054 14:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:13.312 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[
00:29:13.312 {
00:29:13.312 "name": "54050ba0-9c48-4d8a-84fa-354459954a1c",
00:29:13.312 "aliases": [
00:29:13.312 "lvs/nvme0n1p0"
00:29:13.312 ],
00:29:13.312 "product_name": "Logical Volume",
00:29:13.312 "block_size": 4096,
00:29:13.312 "num_blocks": 26476544,
00:29:13.312 "uuid": "54050ba0-9c48-4d8a-84fa-354459954a1c",
00:29:13.312 "assigned_rate_limits": {
00:29:13.312 "rw_ios_per_sec": 0,
00:29:13.312 "rw_mbytes_per_sec": 0,
00:29:13.312 "r_mbytes_per_sec": 0,
00:29:13.312 "w_mbytes_per_sec": 0
00:29:13.312 },
00:29:13.312 "claimed": false,
00:29:13.312 "zoned": false,
00:29:13.312 "supported_io_types": {
00:29:13.312 "read": true,
00:29:13.312 "write": true,
00:29:13.312 "unmap": true,
00:29:13.312 "flush": false,
00:29:13.312 "reset": true,
00:29:13.312 "nvme_admin": false,
00:29:13.312 "nvme_io": false,
00:29:13.312 "nvme_io_md": false,
00:29:13.312 "write_zeroes": true,
00:29:13.312 "zcopy": false,
00:29:13.312 "get_zone_info": false,
00:29:13.312 "zone_management": false,
00:29:13.312 "zone_append": false,
00:29:13.312 "compare": false,
00:29:13.312 "compare_and_write": false,
00:29:13.312 "abort": false,
00:29:13.312 "seek_hole": true,
00:29:13.312 "seek_data": true,
00:29:13.312 "copy": false,
00:29:13.312 "nvme_iov_md": false
00:29:13.312 },
00:29:13.312 "driver_specific": {
00:29:13.312 "lvol": {
00:29:13.312 "lvol_store_uuid": "f723b8c5-6f54-4ce0-8d00-c89912bcfc5b",
00:29:13.312 "base_bdev": "nvme0n1",
00:29:13.312 "thin_provision": true,
00:29:13.312 "num_allocated_clusters": 0,
00:29:13.312 "snapshot": false,
00:29:13.312 "clone": false,
00:29:13.312 "esnap_clone": false
00:29:13.312 }
00:29:13.312 }
00:29:13.312 }
00:29:13.312 ]'
00:29:13.312 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:29:13.312 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096
00:29:13.312 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:29:13.570 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544
00:29:13.570 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:29:13.570 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424
00:29:13.570 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
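The `get_bdev_size` traces above turn `bdev_get_bdevs` JSON into a size in MiB: block_size 4096 × num_blocks 1310720 is 5 GiB for the base NVMe namespace (hence `bdev_size=5120`), and 4096 × 26476544 gives 103424 MiB for the thin LV. The same arithmetic as a standalone snippet (bdev name as in the log):

    # size in MiB = block_size * num_blocks / 1024 / 1024
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")     # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")     # 1310720
    echo $(( bs * nb / 1024 / 1024 ))          # 5120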
00:29:13.570 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb
00:29:13.829 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54050ba0-9c48-4d8a-84fa-354459954a1c
00:29:14.087 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[
00:29:14.087 {
00:29:14.087 "name": "54050ba0-9c48-4d8a-84fa-354459954a1c",
00:29:14.087 "aliases": [
00:29:14.087 "lvs/nvme0n1p0"
00:29:14.087 ],
00:29:14.087 "product_name": "Logical Volume",
00:29:14.087 "block_size": 4096,
00:29:14.087 "num_blocks": 26476544,
00:29:14.087 "uuid": "54050ba0-9c48-4d8a-84fa-354459954a1c",
00:29:14.087 "assigned_rate_limits": {
00:29:14.087 "rw_ios_per_sec": 0,
00:29:14.087 "rw_mbytes_per_sec": 0,
00:29:14.087 "r_mbytes_per_sec": 0,
00:29:14.087 "w_mbytes_per_sec": 0
00:29:14.087 },
00:29:14.087 "claimed": false,
00:29:14.087 "zoned": false,
00:29:14.087 "supported_io_types": {
00:29:14.087 "read": true,
00:29:14.087 "write": true,
00:29:14.087 "unmap": true,
00:29:14.087 "flush": false,
00:29:14.087 "reset": true,
00:29:14.087 "nvme_admin": false,
00:29:14.087 "nvme_io": false,
00:29:14.087 "nvme_io_md": false,
00:29:14.087 "write_zeroes": true,
00:29:14.087 "zcopy": false,
00:29:14.087 "get_zone_info": false,
00:29:14.087 "zone_management": false,
00:29:14.087 "zone_append": false,
00:29:14.087 "compare": false,
00:29:14.087 "compare_and_write": false,
00:29:14.087 "abort": false,
00:29:14.087 "seek_hole": true,
00:29:14.087 "seek_data": true,
00:29:14.087 "copy": false,
00:29:14.087 "nvme_iov_md": false
00:29:14.087 },
00:29:14.087 "driver_specific": {
00:29:14.087 "lvol": {
00:29:14.087 "lvol_store_uuid": "f723b8c5-6f54-4ce0-8d00-c89912bcfc5b",
00:29:14.087 "base_bdev": "nvme0n1",
00:29:14.087 "thin_provision": true,
00:29:14.087 "num_allocated_clusters": 0,
00:29:14.087 "snapshot": false,
00:29:14.087 "clone": false,
00:29:14.087 "esnap_clone": false
00:29:14.087 }
00:29:14.087 }
00:29:14.087 }
00:29:14.087 ]'
00:29:14.087 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:29:14.087 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096
00:29:14.087 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
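At this point the script has assembled the whole bdev stack the FTL device will sit on: an lvstore on nvme0n1, a thin-provisioned 103424 MiB logical volume for data, and a 5171 MiB split of nvc0n1 as the NV cache. Condensed to just the RPC calls traced above (the lvstore UUID differs per run, so it is a placeholder here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # thin LV, prints its UUID
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0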
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 54050ba0-9c48-4d8a-84fa-354459954a1c --l2p_dram_limit 10'
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
00:29:14.088 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 54050ba0-9c48-4d8a-84fa-354459954a1c --l2p_dram_limit 10 -c nvc0n1p0
00:29:14.359 [2024-11-04 14:03:01.172621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.359 [2024-11-04 14:03:01.172694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:14.359 [2024-11-04 14:03:01.172731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:14.359 [2024-11-04 14:03:01.172742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.359 [2024-11-04 14:03:01.172819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.359 [2024-11-04 14:03:01.172832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:14.359 [2024-11-04 14:03:01.172846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:29:14.359 [2024-11-04 14:03:01.172856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.359 [2024-11-04 14:03:01.172888] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:14.359 [2024-11-04 14:03:01.174024] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:14.359 [2024-11-04 14:03:01.174054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.359 [2024-11-04 14:03:01.174065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:14.359 [2024-11-04 14:03:01.174080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms
00:29:14.359 [2024-11-04 14:03:01.174090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.359 [2024-11-04 14:03:01.174135] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d83b1a07-9ea8-46fc-a450-a77fb1d29098
00:29:14.359 [2024-11-04 14:03:01.175626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.359 [2024-11-04 14:03:01.175775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:29:14.359 [2024-11-04 14:03:01.175798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:29:14.359 [2024-11-04 14:03:01.175811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.183330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.183583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:14.360 [2024-11-04 14:03:01.183611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.420 ms
00:29:14.360 [2024-11-04 14:03:01.183624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.183744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.183761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:14.360 [2024-11-04 14:03:01.183772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms
00:29:14.360 [2024-11-04 14:03:01.183789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.183872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.183888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:29:14.360 [2024-11-04 14:03:01.183899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:29:14.360 [2024-11-04 14:03:01.183914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.183941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:14.360 [2024-11-04 14:03:01.188956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.188999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:14.360 [2024-11-04 14:03:01.189015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms
00:29:14.360 [2024-11-04 14:03:01.189026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.189067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.189078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:29:14.360 [2024-11-04 14:03:01.189092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:29:14.360 [2024-11-04 14:03:01.189102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.189149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:29:14.360 [2024-11-04 14:03:01.189279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:14.360 [2024-11-04 14:03:01.189299] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:14.360 [2024-11-04 14:03:01.189313] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:14.360 [2024-11-04 14:03:01.189329] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189341] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189355] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:29:14.360 [2024-11-04 14:03:01.189366] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:29:14.360 [2024-11-04 14:03:01.189392] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:29:14.360 [2024-11-04 14:03:01.189403] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:29:14.360 [2024-11-04 14:03:01.189418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.189429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:29:14.360 [2024-11-04 14:03:01.189443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms
00:29:14.360 [2024-11-04 14:03:01.189465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.189543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.360 [2024-11-04 14:03:01.189554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:29:14.360 [2024-11-04 14:03:01.189585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:29:14.360 [2024-11-04 14:03:01.189597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.360 [2024-11-04 14:03:01.189707] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:14.360 [2024-11-04 14:03:01.189720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:14.360 [2024-11-04 14:03:01.189734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:14.360 [2024-11-04 14:03:01.189767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:14.360 [2024-11-04 14:03:01.189802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:14.360 [2024-11-04 14:03:01.189824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:14.360 [2024-11-04 14:03:01.189833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:29:14.360 [2024-11-04 14:03:01.189845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:14.360 [2024-11-04 14:03:01.189854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:14.360 [2024-11-04 14:03:01.189866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:29:14.360 [2024-11-04 14:03:01.189875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:14.360 [2024-11-04 14:03:01.189901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:14.360 [2024-11-04 14:03:01.189934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:14.360 [2024-11-04 14:03:01.189964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:29:14.360 [2024-11-04 14:03:01.189977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:14.360 [2024-11-04 14:03:01.189987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:14.360 [2024-11-04 14:03:01.189999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:29:14.360 [2024-11-04 14:03:01.190008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:14.360 [2024-11-04 14:03:01.190019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:14.360 [2024-11-04 14:03:01.190029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:29:14.360 [2024-11-04 14:03:01.190040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:14.360 [2024-11-04 14:03:01.190050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:14.360 [2024-11-04 14:03:01.190064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:29:14.360 [2024-11-04 14:03:01.190074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:14.360 [2024-11-04 14:03:01.190085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:14.360 [2024-11-04 14:03:01.190095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:29:14.360 [2024-11-04 14:03:01.190107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:14.360 [2024-11-04 14:03:01.190116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:14.360 [2024-11-04 14:03:01.190128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:29:14.360 [2024-11-04 14:03:01.190137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.190149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:14.360 [2024-11-04 14:03:01.190159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:29:14.360 [2024-11-04 14:03:01.190171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.190179] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:14.360 [2024-11-04 14:03:01.190193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:14.360 [2024-11-04 14:03:01.190203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:14.360 [2024-11-04 14:03:01.190215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:14.360 [2024-11-04 14:03:01.190226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:14.360 [2024-11-04 14:03:01.190240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:29:14.360 [2024-11-04 14:03:01.190249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:29:14.360 [2024-11-04 14:03:01.190261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:14.360 [2024-11-04 14:03:01.190270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:29:14.360 [2024-11-04 14:03:01.190282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:29:14.360 [2024-11-04 14:03:01.190296] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:14.360 [2024-11-04 14:03:01.190312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:14.360 [2024-11-04 14:03:01.190327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:14.360 [2024-11-04 14:03:01.190341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:14.360 [2024-11-04 14:03:01.190352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:14.360 [2024-11-04 14:03:01.190364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:14.360 [2024-11-04 14:03:01.190375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:14.360 [2024-11-04 14:03:01.190388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:14.360 [2024-11-04 14:03:01.190399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:14.360 [2024-11-04 14:03:01.190411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:14.360 [2024-11-04 14:03:01.190422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:14.360 [2024-11-04 14:03:01.190437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:14.360 [2024-11-04 14:03:01.190447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:14.361 [2024-11-04 14:03:01.190462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:14.361 [2024-11-04 14:03:01.190472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:14.361 [2024-11-04 14:03:01.190485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:14.361 [2024-11-04 14:03:01.190495] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:14.361 [2024-11-04 14:03:01.190509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:14.361 [2024-11-04 14:03:01.190521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:14.361 [2024-11-04 14:03:01.190534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:14.361 [2024-11-04 14:03:01.190545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:14.361 [2024-11-04 14:03:01.190558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:14.361 [2024-11-04 14:03:01.190578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.361 [2024-11-04 14:03:01.190592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:29:14.361 [2024-11-04 14:03:01.190602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms
00:29:14.361 [2024-11-04 14:03:01.190615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
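In the superblock dump above, `blk_offs` and `blk_sz` are hex counts of 4 KiB blocks, so they can be cross-checked against the MiB figures in the layout dump: the L2P region (`type:0x2`) has `blk_sz:0x5000` = 20480 blocks × 4096 B = 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB" (and 20971520 L2P entries × 4 B per address gives the same 80 MiB). A quick shell check of that conversion:

    # convert an FTL region size in hex 4 KiB blocks to MiB
    blk_sz=0x5000
    echo $(( blk_sz * 4096 / 1024 / 1024 ))   # -> 80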
00:29:14.361 [2024-11-04 14:03:01.190657] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:29:14.361 [2024-11-04 14:03:01.190675] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:29:16.899 [2024-11-04 14:03:03.703002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.703079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:29:16.899 [2024-11-04 14:03:03.703098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2512.328 ms
00:29:16.899 [2024-11-04 14:03:03.703113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.744215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.744279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:16.899 [2024-11-04 14:03:03.744298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.737 ms
00:29:16.899 [2024-11-04 14:03:03.744311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.744474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.744491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:29:16.899 [2024-11-04 14:03:03.744503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:29:16.899 [2024-11-04 14:03:03.744518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.793210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.793270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:16.899 [2024-11-04 14:03:03.793287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.642 ms
00:29:16.899 [2024-11-04 14:03:03.793300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.793354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.793373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:16.899 [2024-11-04 14:03:03.793384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:29:16.899 [2024-11-04 14:03:03.793397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.793951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.793972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:16.899 [2024-11-04 14:03:03.793984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms
00:29:16.899 [2024-11-04 14:03:03.793996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.794119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.794134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:16.899 [2024-11-04 14:03:03.794149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms
00:29:16.899 [2024-11-04 14:03:03.794165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.899 [2024-11-04 14:03:03.813300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:16.899 [2024-11-04 14:03:03.813541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:16.899 [2024-11-04 14:03:03.813589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.111 ms
00:29:16.899 [2024-11-04 14:03:03.813604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.183 [2024-11-04 14:03:03.825856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:29:17.183 [2024-11-04 14:03:03.829130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.183 [2024-11-04 14:03:03.829162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:29:17.183 [2024-11-04 14:03:03.829180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.404 ms
00:29:17.183 [2024-11-04 14:03:03.829191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.183 [2024-11-04 14:03:03.910300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.183 [2024-11-04 14:03:03.910375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:29:17.183 [2024-11-04 14:03:03.910396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.053 ms
00:29:17.183 [2024-11-04 14:03:03.910408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.183 [2024-11-04 14:03:03.910670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.183 [2024-11-04 14:03:03.910690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:29:17.183 [2024-11-04 14:03:03.910710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms
00:29:17.183 [2024-11-04 14:03:03.910721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.183 [2024-11-04 14:03:03.950680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.183 [2024-11-04 14:03:03.950912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:29:17.183 [2024-11-04 14:03:03.950946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.848 ms
00:29:17.184 [2024-11-04 14:03:03.950959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.184 [2024-11-04 14:03:03.991246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.184 [2024-11-04 14:03:03.991306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:29:17.184 [2024-11-04 14:03:03.991328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.183 ms
00:29:17.184 [2024-11-04 14:03:03.991339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.184 [2024-11-04 14:03:03.992156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.184 [2024-11-04 14:03:03.992183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:29:17.184 [2024-11-04 14:03:03.992199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms
00:29:17.184 [2024-11-04 14:03:03.992212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.108105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.441 [2024-11-04 14:03:04.108181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:29:17.441 [2024-11-04 14:03:04.108207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.783 ms
00:29:17.441 [2024-11-04 14:03:04.108219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.155210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.441 [2024-11-04 14:03:04.155474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:29:17.441 [2024-11-04 14:03:04.155508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.827 ms
00:29:17.441 [2024-11-04 14:03:04.155521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.199791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.441 [2024-11-04 14:03:04.199852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:29:17.441 [2024-11-04 14:03:04.199875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.141 ms
00:29:17.441 [2024-11-04 14:03:04.199890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.243661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.441 [2024-11-04 14:03:04.243721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:17.441 [2024-11-04 14:03:04.243742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.616 ms
00:29:17.441 [2024-11-04 14:03:04.243754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.243829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.441 [2024-11-04 14:03:04.243843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:17.441 [2024-11-04 14:03:04.243862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:29:17.441 [2024-11-04 14:03:04.243873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.244014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:17.441 [2024-11-04 14:03:04.244028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:17.441 [2024-11-04 14:03:04.244046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:29:17.441 [2024-11-04 14:03:04.244057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:17.441 [2024-11-04 14:03:04.245365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3072.149 ms, result 0
00:29:17.441 {
00:29:17.441 "name": "ftl0",
00:29:17.441 "uuid": "d83b1a07-9ea8-46fc-a450-a77fb1d29098"
00:29:17.441 }
00:29:17.441 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:29:17.441 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:29:17.700 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:29:17.700 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:29:17.700 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:29:18.287 /dev/nbd0
00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i
common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:18.287 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:18.287 1+0 records in 00:29:18.287 1+0 records out 00:29:18.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340533 s, 12.0 MB/s 00:29:18.287 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:18.287 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:29:18.287 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:18.287 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:18.287 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:29:18.287 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:18.287 [2024-11-04 14:03:05.131057] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:29:18.287 [2024-11-04 14:03:05.131254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79276 ] 00:29:18.545 [2024-11-04 14:03:05.335284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.804 [2024-11-04 14:03:05.508847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.180  [2024-11-04T14:03:08.038Z] Copying: 173/1024 [MB] (173 MBps) [2024-11-04T14:03:08.974Z] Copying: 353/1024 [MB] (180 MBps) [2024-11-04T14:03:09.911Z] Copying: 546/1024 [MB] (193 MBps) [2024-11-04T14:03:11.285Z] Copying: 735/1024 [MB] (188 MBps) [2024-11-04T14:03:11.849Z] Copying: 912/1024 [MB] (176 MBps) [2024-11-04T14:03:13.222Z] Copying: 1024/1024 [MB] (average 180 MBps) 00:29:26.300 00:29:26.300 14:03:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:28.201 14:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:28.460 [2024-11-04 14:03:15.147906] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
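The bash xtrace fragments interleaved above come from the waitfornbd helper in common/autotest_common.sh (@870-@891 in the trace): it polls /proc/partitions until nbd0 shows up, then proves the device actually serves I/O with a single 4 KiB O_DIRECT read whose size it verifies with stat. Reconstructed into readable shell, a sketch of that helper (the loop bounds, grep, dd, stat, and size test are taken from the trace; the retry sleep and the temp-file path are assumptions):

waitfornbd() {
    local nbd_name=$1 i size
    # Poll until the kernel lists the device (trace: grep -q -w nbd0 /proc/partitions)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the xtrace only records the loop counter
    done
    # Prove the device serves reads: one 4 KiB O_DIRECT read, size-checked with stat
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1   # assumed back-off
    done
    return 1
}

Only after waitfornbd nbd0 returns 0 does the script start the bulk spdk_dd transfer whose startup is traced below.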
00:29:28.460 [2024-11-04 14:03:15.148157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79380 ]
00:29:28.460 [2024-11-04 14:03:15.359153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:28.717 [2024-11-04 14:03:15.532093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.092  [2024-11-04T14:03:17.950Z] Copying: 16/1024 [MB] (16 MBps)
(intermediate progress ticks, 30/1024 through 904/1024 [MB] at a steady 13-19 MBps, elided)
[2024-11-04T14:04:08.277Z] Copying: 923/1024 [MB] (18 MBps) [2024-11-04T14:04:09.213Z] Copying: 942/1024 [MB] (18 MBps) [2024-11-04T14:04:10.157Z] Copying: 961/1024 [MB] (18 MBps) [2024-11-04T14:04:11.091Z] Copying: 979/1024 [MB] (18 MBps) [2024-11-04T14:04:12.025Z] Copying: 998/1024 [MB] (18 MBps) [2024-11-04T14:04:12.283Z] Copying: 1017/1024 [MB] (18 MBps) [2024-11-04T14:04:13.658Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:30:26.736 00:30:26.736 14:04:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:26.736 14:04:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:26.995 14:04:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:27.254 [2024-11-04 14:04:14.103217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.254 [2024-11-04 14:04:14.103289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:27.254 [2024-11-04 14:04:14.103308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:27.254 [2024-11-04 14:04:14.103323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.254 [2024-11-04 14:04:14.103354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:27.254 [2024-11-04 14:04:14.108046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.254 [2024-11-04 14:04:14.108085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:27.254 [2024-11-04 14:04:14.108103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:30:27.254 [2024-11-04 14:04:14.108115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.254 [2024-11-04 14:04:14.110103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.254 [2024-11-04 14:04:14.110298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:27.254 [2024-11-04 14:04:14.110330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:30:27.254 [2024-11-04 14:04:14.110343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.254 [2024-11-04 14:04:14.126381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.254 [2024-11-04 14:04:14.126451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:27.254 [2024-11-04 14:04:14.126472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.989 ms 00:30:27.254 [2024-11-04 14:04:14.126485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.254 [2024-11-04 14:04:14.132441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.254 [2024-11-04 14:04:14.132495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:27.254 [2024-11-04 14:04:14.132515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.898 ms 00:30:27.254 [2024-11-04 14:04:14.132526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.514 [2024-11-04 14:04:14.175142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.514 [2024-11-04 14:04:14.175229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:27.514 [2024-11-04 14:04:14.175251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 42.474 ms 00:30:27.514 [2024-11-04 14:04:14.175262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.514 [2024-11-04 14:04:14.200491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.514 [2024-11-04 14:04:14.200780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:27.514 [2024-11-04 14:04:14.200827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.132 ms 00:30:27.514 [2024-11-04 14:04:14.200844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.514 [2024-11-04 14:04:14.201069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.514 [2024-11-04 14:04:14.201086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:27.514 [2024-11-04 14:04:14.201103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:30:27.514 [2024-11-04 14:04:14.201115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.514 [2024-11-04 14:04:14.246307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.514 [2024-11-04 14:04:14.246595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:27.514 [2024-11-04 14:04:14.246629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.151 ms 00:30:27.514 [2024-11-04 14:04:14.246641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.514 [2024-11-04 14:04:14.292107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.514 [2024-11-04 14:04:14.292413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:27.514 [2024-11-04 14:04:14.292456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.359 ms 00:30:27.514 [2024-11-04 14:04:14.292469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.514 [2024-11-04 14:04:14.337491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.514 [2024-11-04 14:04:14.337563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:27.514 [2024-11-04 14:04:14.337600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.924 ms 00:30:27.515 [2024-11-04 14:04:14.337612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.515 [2024-11-04 14:04:14.382369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.515 [2024-11-04 14:04:14.382463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:27.515 [2024-11-04 14:04:14.382486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.553 ms 00:30:27.515 [2024-11-04 14:04:14.382499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.515 [2024-11-04 14:04:14.382616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:27.515 [2024-11-04 14:04:14.382638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:27.515 [2024-11-04 14:04:14.382655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:27.515 [2024-11-04 14:04:14.382668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:27.515 [2024-11-04 14:04:14.382684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.515 
(Bands 5 through 100 report identically: 0 / 261120 wr_cnt: 0 state: free)
00:30:27.516 [2024-11-04 14:04:14.384140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:27.516 [2024-11-04 14:04:14.384155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d83b1a07-9ea8-46fc-a450-a77fb1d29098
00:30:27.516 [2024-11-04 14:04:14.384168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:30:27.516 [2024-11-04 14:04:14.384185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:30:27.516 [2024-11-04 14:04:14.384197] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:27.516 [2024-11-04 14:04:14.384216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:27.516 [2024-11-04 14:04:14.384228] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:27.516 [2024-11-04 14:04:14.384243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:27.516 [2024-11-04 14:04:14.384255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:27.516 [2024-11-04 14:04:14.384269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:27.516 [2024-11-04 14:04:14.384279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:27.516 [2024-11-04 14:04:14.384295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.516 [2024-11-04 14:04:14.384308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:27.516 [2024-11-04 14:04:14.384324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:30:27.516 [2024-11-04 14:04:14.384336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.516 [2024-11-04 14:04:14.408214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.516 [2024-11-04 14:04:14.408283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:27.517 [2024-11-04 14:04:14.408308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.743 ms 00:30:27.517 [2024-11-04 14:04:14.408321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.517 [2024-11-04 14:04:14.408984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.517 [2024-11-04 14:04:14.409006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:27.517 [2024-11-04 14:04:14.409023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:30:27.517 [2024-11-04 14:04:14.409044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.807 [2024-11-04 14:04:14.485489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.807 [2024-11-04 14:04:14.485563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.807 [2024-11-04 14:04:14.485599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.807 [2024-11-04 14:04:14.485612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.807 [2024-11-04 14:04:14.485706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.807 [2024-11-04 14:04:14.485719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.807 [2024-11-04 14:04:14.485734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.807 [2024-11-04 14:04:14.485746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.807 [2024-11-04 14:04:14.485909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.807 [2024-11-04 14:04:14.485925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.807 [2024-11-04 14:04:14.485944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.807 [2024-11-04 14:04:14.485956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.807 [2024-11-04 14:04:14.485985] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.807 [2024-11-04 14:04:14.485998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.807 [2024-11-04 14:04:14.486013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.807 [2024-11-04 14:04:14.486024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.807 [2024-11-04 14:04:14.627882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.807 [2024-11-04 14:04:14.627977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.807 [2024-11-04 14:04:14.627997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.807 [2024-11-04 14:04:14.628009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.745710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.745777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.087 [2024-11-04 14:04:14.745797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.745809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.745949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.745962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.087 [2024-11-04 14:04:14.745978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.745993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.746061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.746074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.087 [2024-11-04 14:04:14.746088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.746100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.746242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.746256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.087 [2024-11-04 14:04:14.746271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.746282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.746332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.746346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:28.087 [2024-11-04 14:04:14.746360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.746370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.746417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.746429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.087 [2024-11-04 14:04:14.746443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.746453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:28.087 [2024-11-04 14:04:14.746526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.087 [2024-11-04 14:04:14.746540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.087 [2024-11-04 14:04:14.746555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.087 [2024-11-04 14:04:14.746597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.087 [2024-11-04 14:04:14.746749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 643.495 ms, result 0 00:30:28.087 true 00:30:28.087 14:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79128 00:30:28.087 14:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79128 00:30:28.087 14:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:28.087 [2024-11-04 14:04:14.878019] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:30:28.087 [2024-11-04 14:04:14.878440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79977 ] 00:30:28.346 [2024-11-04 14:04:15.059337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.346 [2024-11-04 14:04:15.193881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.721  [2024-11-04T14:04:17.580Z] Copying: 173/1024 [MB] (173 MBps) [2024-11-04T14:04:18.955Z] Copying: 350/1024 [MB] (177 MBps) [2024-11-04T14:04:19.888Z] Copying: 538/1024 [MB] (187 MBps) [2024-11-04T14:04:20.821Z] Copying: 729/1024 [MB] (191 MBps) [2024-11-04T14:04:21.386Z] Copying: 913/1024 [MB] (183 MBps) [2024-11-04T14:04:22.760Z] Copying: 1024/1024 [MB] (average 181 MBps) 00:30:35.838 00:30:35.838 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79128 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:35.839 14:04:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:35.839 [2024-11-04 14:04:22.563958] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
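This is the dirty-shutdown pivot of the test: the spdk_tgt that hosted the stack (pid 79128) is killed with SIGKILL, its stale trace pid file is removed, a second 1 GiB random file is staged, and the spdk_dd whose startup begins above reopens ftl0 directly from the saved JSON config. The blobstore replay and the FTL 'Restore ...' steps in the trace that follows are the recovery path this test exists to exercise. Condensed into plain shell (a sketch of the traced commands with paths shortened; $tgt_pid stands in for 79128):

# Dirty-shutdown cycle as traced above ($tgt_pid = 79128 in this run)
kill -9 "$tgt_pid"                               # SIGKILL the target process
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"      # clear its stale trace pid file
spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json

Because the second spdk_dd brings ftl0 up from ftl.json inside its own process, its startup must run the restore sequence logged below before the 1 GiB write to the upper half of the device can begin.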
00:30:35.839 [2024-11-04 14:04:22.564675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80054 ] 00:30:36.097 [2024-11-04 14:04:22.762081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.097 [2024-11-04 14:04:22.888381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.663 [2024-11-04 14:04:23.300944] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:36.663 [2024-11-04 14:04:23.301028] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:36.663 [2024-11-04 14:04:23.369105] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:36.663 [2024-11-04 14:04:23.369418] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:36.663 [2024-11-04 14:04:23.369600] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:36.922 [2024-11-04 14:04:23.594902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.594961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:36.922 [2024-11-04 14:04:23.594977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:36.922 [2024-11-04 14:04:23.594988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.595048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.595061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:36.922 [2024-11-04 14:04:23.595073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:36.922 [2024-11-04 14:04:23.595082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.595105] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:36.922 [2024-11-04 14:04:23.596157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:36.922 [2024-11-04 14:04:23.596188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.596200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:36.922 [2024-11-04 14:04:23.596213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:30:36.922 [2024-11-04 14:04:23.596224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.597846] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:36.922 [2024-11-04 14:04:23.620307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.620364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:36.922 [2024-11-04 14:04:23.620383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.461 ms 00:30:36.922 [2024-11-04 14:04:23.620396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.620478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.620504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:36.922 [2024-11-04 14:04:23.620516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:36.922 [2024-11-04 14:04:23.620527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.627938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.627982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:36.922 [2024-11-04 14:04:23.627997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.299 ms 00:30:36.922 [2024-11-04 14:04:23.628008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.628103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.628118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:36.922 [2024-11-04 14:04:23.628131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:36.922 [2024-11-04 14:04:23.628142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.628197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.628215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:36.922 [2024-11-04 14:04:23.628227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:36.922 [2024-11-04 14:04:23.628238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.628267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:36.922 [2024-11-04 14:04:23.633554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.633618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:36.922 [2024-11-04 14:04:23.633632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.294 ms 00:30:36.922 [2024-11-04 14:04:23.633661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.633709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.633721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:36.922 [2024-11-04 14:04:23.633733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:36.922 [2024-11-04 14:04:23.633745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.922 [2024-11-04 14:04:23.633808] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:36.922 [2024-11-04 14:04:23.633838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:36.922 [2024-11-04 14:04:23.633888] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:36.922 [2024-11-04 14:04:23.633907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:36.922 [2024-11-04 14:04:23.633999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:36.922 [2024-11-04 14:04:23.634012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:36.922 
[2024-11-04 14:04:23.634025] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:36.922 [2024-11-04 14:04:23.634039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:36.922 [2024-11-04 14:04:23.634054] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:36.922 [2024-11-04 14:04:23.634066] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:36.922 [2024-11-04 14:04:23.634076] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:36.922 [2024-11-04 14:04:23.634086] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:36.922 [2024-11-04 14:04:23.634096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:36.922 [2024-11-04 14:04:23.634107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.922 [2024-11-04 14:04:23.634118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:36.923 [2024-11-04 14:04:23.634128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:30:36.923 [2024-11-04 14:04:23.634139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.923 [2024-11-04 14:04:23.634216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.923 [2024-11-04 14:04:23.634231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:36.923 [2024-11-04 14:04:23.634242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:36.923 [2024-11-04 14:04:23.634252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.923 [2024-11-04 14:04:23.634348] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:36.923 [2024-11-04 14:04:23.634362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:36.923 [2024-11-04 14:04:23.634373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:36.923 [2024-11-04 14:04:23.634384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:36.923 [2024-11-04 14:04:23.634404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:36.923 [2024-11-04 14:04:23.634426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:36.923 [2024-11-04 14:04:23.634437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:36.923 [2024-11-04 14:04:23.634456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:36.923 [2024-11-04 14:04:23.634476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:36.923 [2024-11-04 14:04:23.634485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:36.923 [2024-11-04 14:04:23.634495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:36.923 [2024-11-04 14:04:23.634505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:36.923 [2024-11-04 14:04:23.634515] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:36.923 [2024-11-04 14:04:23.634534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:36.923 [2024-11-04 14:04:23.634543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:36.923 [2024-11-04 14:04:23.634562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:36.923 [2024-11-04 14:04:23.634581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:36.923 [2024-11-04 14:04:23.634590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:36.923 [2024-11-04 14:04:23.634848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:36.923 [2024-11-04 14:04:23.634885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:36.923 [2024-11-04 14:04:23.634917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:36.923 [2024-11-04 14:04:23.634947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:36.923 [2024-11-04 14:04:23.634976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:36.923 [2024-11-04 14:04:23.635006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:36.923 [2024-11-04 14:04:23.635036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:36.923 [2024-11-04 14:04:23.635124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:36.923 [2024-11-04 14:04:23.635159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:36.923 [2024-11-04 14:04:23.635190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:36.923 [2024-11-04 14:04:23.635220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:36.923 [2024-11-04 14:04:23.635250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:36.923 [2024-11-04 14:04:23.635279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:36.923 [2024-11-04 14:04:23.635308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:36.923 [2024-11-04 14:04:23.635454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:36.923 [2024-11-04 14:04:23.635484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:36.923 [2024-11-04 14:04:23.635514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:36.923 [2024-11-04 14:04:23.635543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:36.923 [2024-11-04 14:04:23.635594] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:36.923 [2024-11-04 14:04:23.635671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:36.923 [2024-11-04 14:04:23.635765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:36.923 [2024-11-04 14:04:23.635849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:36.923 [2024-11-04 
14:04:23.635906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:36.923 [2024-11-04 14:04:23.635940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:36.923 [2024-11-04 14:04:23.635972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:36.923 [2024-11-04 14:04:23.636145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:36.923 [2024-11-04 14:04:23.636161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:36.923 [2024-11-04 14:04:23.636173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:36.923 [2024-11-04 14:04:23.636186] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:36.923 [2024-11-04 14:04:23.636201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:36.923 [2024-11-04 14:04:23.636226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:36.923 [2024-11-04 14:04:23.636238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:36.923 [2024-11-04 14:04:23.636250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:36.923 [2024-11-04 14:04:23.636261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:36.923 [2024-11-04 14:04:23.636273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:36.923 [2024-11-04 14:04:23.636284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:36.923 [2024-11-04 14:04:23.636296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:36.923 [2024-11-04 14:04:23.636308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:36.923 [2024-11-04 14:04:23.636319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:36.923 [2024-11-04 14:04:23.636376] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:36.923 [2024-11-04 14:04:23.636388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:36.923 [2024-11-04 14:04:23.636412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:36.923 [2024-11-04 14:04:23.636424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:36.923 [2024-11-04 14:04:23.636435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:36.923 [2024-11-04 14:04:23.636450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.923 [2024-11-04 14:04:23.636462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:36.923 [2024-11-04 14:04:23.636474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:30:36.923 [2024-11-04 14:04:23.636487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.923 [2024-11-04 14:04:23.679231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.923 [2024-11-04 14:04:23.679292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:36.923 [2024-11-04 14:04:23.679309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.674 ms 00:30:36.923 [2024-11-04 14:04:23.679320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.923 [2024-11-04 14:04:23.679423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.923 [2024-11-04 14:04:23.679456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:36.923 [2024-11-04 14:04:23.679469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:36.923 [2024-11-04 14:04:23.679480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.923 [2024-11-04 14:04:23.742672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.923 [2024-11-04 14:04:23.742736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:36.923 [2024-11-04 14:04:23.742770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.103 ms 00:30:36.923 [2024-11-04 14:04:23.742788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.923 [2024-11-04 14:04:23.742858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.923 [2024-11-04 14:04:23.742871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:36.923 [2024-11-04 14:04:23.742884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:36.924 [2024-11-04 14:04:23.742896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.924 [2024-11-04 14:04:23.743427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.924 [2024-11-04 14:04:23.743444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:36.924 [2024-11-04 14:04:23.743457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:30:36.924 [2024-11-04 14:04:23.743470] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.924 [2024-11-04 14:04:23.743641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.924 [2024-11-04 14:04:23.743674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:36.924 [2024-11-04 14:04:23.743688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:30:36.924 [2024-11-04 14:04:23.743701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.924 [2024-11-04 14:04:23.764506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.924 [2024-11-04 14:04:23.764758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:36.924 [2024-11-04 14:04:23.764796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.778 ms 00:30:36.924 [2024-11-04 14:04:23.764809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.924 [2024-11-04 14:04:23.785080] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:36.924 [2024-11-04 14:04:23.785128] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:36.924 [2024-11-04 14:04:23.785145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.924 [2024-11-04 14:04:23.785156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:36.924 [2024-11-04 14:04:23.785169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.184 ms 00:30:36.924 [2024-11-04 14:04:23.785179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.924 [2024-11-04 14:04:23.815993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.924 [2024-11-04 14:04:23.816174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:36.924 [2024-11-04 14:04:23.816210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.764 ms 00:30:36.924 [2024-11-04 14:04:23.816222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.924 [2024-11-04 14:04:23.835204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.924 [2024-11-04 14:04:23.835403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:36.924 [2024-11-04 14:04:23.835428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.898 ms 00:30:36.924 [2024-11-04 14:04:23.835440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.855005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.855053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:37.182 [2024-11-04 14:04:23.855067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.419 ms 00:30:37.182 [2024-11-04 14:04:23.855078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.855978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.856010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:37.182 [2024-11-04 14:04:23.856024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:30:37.182 [2024-11-04 14:04:23.856035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
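Every FTL management step in this log is emitted by mngt/ftl_mngt.c:trace_step as a fixed quartet of records: Action, name, duration, and status. That makes the per-step timings easy to pull out mechanically. Below is a minimal sketch, assuming the console output has been saved with one record per line (the file name ftl_startup.log is hypothetical); it pairs each name record with the duration record that follows it and totals them for comparison against the 'Management process finished' summary further down.

    import re

    # Record shapes, as seen in this log:
    #   ... 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
    #   ... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.179 ms
    NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
    DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    steps, pending = [], None
    with open("ftl_startup.log") as fh:  # hypothetical file name
        for line in fh:
            if (m := NAME.search(line)):
                pending = m.group(1)
            elif (m := DUR.search(line)) and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None

    # Five slowest steps, then the sum. The sum only approximates the
    # reported 'FTL startup' total: gaps between steps are not counted.
    for name, ms in sorted(steps, key=lambda s: -s[1])[:5]:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  total")
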
00:30:37.182 [2024-11-04 14:04:23.947240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.947325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:37.182 [2024-11-04 14:04:23.947343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.179 ms 00:30:37.182 [2024-11-04 14:04:23.947355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.959793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:37.182 [2024-11-04 14:04:23.963043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.963221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:37.182 [2024-11-04 14:04:23.963246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:30:37.182 [2024-11-04 14:04:23.963258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.963374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.963387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:37.182 [2024-11-04 14:04:23.963399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:37.182 [2024-11-04 14:04:23.963409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.963505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.963518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:37.182 [2024-11-04 14:04:23.963530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:37.182 [2024-11-04 14:04:23.963540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.963583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.963600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:37.182 [2024-11-04 14:04:23.963611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:37.182 [2024-11-04 14:04:23.963621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:23.963655] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:37.182 [2024-11-04 14:04:23.963667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:23.963678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:37.182 [2024-11-04 14:04:23.963688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:37.182 [2024-11-04 14:04:23.963698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:24.001438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 14:04:24.001483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:37.182 [2024-11-04 14:04:24.001499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.713 ms 00:30:37.182 [2024-11-04 14:04:24.001510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:24.001606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.182 [2024-11-04 
14:04:24.001621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:37.182 [2024-11-04 14:04:24.001633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:37.182 [2024-11-04 14:04:24.001644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.182 [2024-11-04 14:04:24.002937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.497 ms, result 0 00:30:38.119  [2024-11-04T14:04:26.418Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-04T14:04:27.355Z] Copying: 67/1024 [MB] (33 MBps) [2024-11-04T14:04:28.291Z] Copying: 100/1024 [MB] (32 MBps) [2024-11-04T14:04:29.229Z] Copying: 130/1024 [MB] (30 MBps) [2024-11-04T14:04:30.164Z] Copying: 164/1024 [MB] (33 MBps) [2024-11-04T14:04:31.134Z] Copying: 198/1024 [MB] (33 MBps) [2024-11-04T14:04:32.071Z] Copying: 232/1024 [MB] (33 MBps) [2024-11-04T14:04:33.449Z] Copying: 266/1024 [MB] (34 MBps) [2024-11-04T14:04:34.016Z] Copying: 300/1024 [MB] (34 MBps) [2024-11-04T14:04:35.395Z] Copying: 334/1024 [MB] (33 MBps) [2024-11-04T14:04:36.332Z] Copying: 367/1024 [MB] (33 MBps) [2024-11-04T14:04:37.267Z] Copying: 401/1024 [MB] (33 MBps) [2024-11-04T14:04:38.206Z] Copying: 432/1024 [MB] (31 MBps) [2024-11-04T14:04:39.169Z] Copying: 466/1024 [MB] (33 MBps) [2024-11-04T14:04:40.107Z] Copying: 499/1024 [MB] (32 MBps) [2024-11-04T14:04:41.044Z] Copying: 532/1024 [MB] (32 MBps) [2024-11-04T14:04:42.421Z] Copying: 563/1024 [MB] (31 MBps) [2024-11-04T14:04:43.356Z] Copying: 594/1024 [MB] (31 MBps) [2024-11-04T14:04:44.320Z] Copying: 627/1024 [MB] (32 MBps) [2024-11-04T14:04:45.256Z] Copying: 660/1024 [MB] (32 MBps) [2024-11-04T14:04:46.204Z] Copying: 692/1024 [MB] (32 MBps) [2024-11-04T14:04:47.138Z] Copying: 725/1024 [MB] (32 MBps) [2024-11-04T14:04:48.074Z] Copying: 759/1024 [MB] (33 MBps) [2024-11-04T14:04:49.449Z] Copying: 792/1024 [MB] (33 MBps) [2024-11-04T14:04:50.020Z] Copying: 824/1024 [MB] (32 MBps) [2024-11-04T14:04:51.395Z] Copying: 858/1024 [MB] (33 MBps) [2024-11-04T14:04:52.333Z] Copying: 892/1024 [MB] (34 MBps) [2024-11-04T14:04:53.267Z] Copying: 930/1024 [MB] (37 MBps) [2024-11-04T14:04:54.203Z] Copying: 967/1024 [MB] (37 MBps) [2024-11-04T14:04:55.138Z] Copying: 1004/1024 [MB] (36 MBps) [2024-11-04T14:04:55.705Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-04T14:04:55.705Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-04 14:04:55.676721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.783 [2024-11-04 14:04:55.676824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:08.783 [2024-11-04 14:04:55.676847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:08.783 [2024-11-04 14:04:55.676861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.783 [2024-11-04 14:04:55.679021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:08.783 [2024-11-04 14:04:55.687142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.783 [2024-11-04 14:04:55.687207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:08.783 [2024-11-04 14:04:55.687236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.057 ms 00:31:08.783 [2024-11-04 14:04:55.687259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.783 [2024-11-04 14:04:55.701293] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.783 [2024-11-04 14:04:55.701348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:08.783 [2024-11-04 14:04:55.701367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.800 ms 00:31:08.783 [2024-11-04 14:04:55.701381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.042 [2024-11-04 14:04:55.725048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.042 [2024-11-04 14:04:55.725116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:09.042 [2024-11-04 14:04:55.725135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.641 ms 00:31:09.042 [2024-11-04 14:04:55.725150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.042 [2024-11-04 14:04:55.731283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.042 [2024-11-04 14:04:55.731336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:09.042 [2024-11-04 14:04:55.731352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:31:09.042 [2024-11-04 14:04:55.731364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.042 [2024-11-04 14:04:55.776772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.042 [2024-11-04 14:04:55.776868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:09.042 [2024-11-04 14:04:55.776887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.350 ms 00:31:09.042 [2024-11-04 14:04:55.776900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.042 [2024-11-04 14:04:55.802452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.042 [2024-11-04 14:04:55.802523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:09.042 [2024-11-04 14:04:55.802543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.480 ms 00:31:09.042 [2024-11-04 14:04:55.802556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.042 [2024-11-04 14:04:55.870750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.042 [2024-11-04 14:04:55.870836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:09.042 [2024-11-04 14:04:55.870868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.102 ms 00:31:09.042 [2024-11-04 14:04:55.870896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.042 [2024-11-04 14:04:55.918474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.042 [2024-11-04 14:04:55.918549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:09.042 [2024-11-04 14:04:55.918577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.550 ms 00:31:09.042 [2024-11-04 14:04:55.918590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.302 [2024-11-04 14:04:55.965276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.302 [2024-11-04 14:04:55.965331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:09.302 [2024-11-04 14:04:55.965351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.603 ms 00:31:09.302 [2024-11-04 14:04:55.965364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:31:09.302 [2024-11-04 14:04:56.011559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.302 [2024-11-04 14:04:56.011638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:09.302 [2024-11-04 14:04:56.011657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.110 ms 00:31:09.302 [2024-11-04 14:04:56.011669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.302 [2024-11-04 14:04:56.058169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.302 [2024-11-04 14:04:56.058245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:09.302 [2024-11-04 14:04:56.058265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.322 ms 00:31:09.302 [2024-11-04 14:04:56.058277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.302 [2024-11-04 14:04:56.058357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:09.302 [2024-11-04 14:04:56.058379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 124160 / 261120 wr_cnt: 1 state: open 00:31:09.302 [2024-11-04 14:04:56.058395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:09.302 [2024-11-04 14:04:56.058486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
18: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058958] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.058997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059281] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 
14:04:56.059618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:09.303 [2024-11-04 14:04:56.059683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:09.304 [2024-11-04 14:04:56.059696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:09.304 [2024-11-04 14:04:56.059708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:09.304 [2024-11-04 14:04:56.059731] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:09.304 [2024-11-04 14:04:56.059743] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d83b1a07-9ea8-46fc-a450-a77fb1d29098 00:31:09.304 [2024-11-04 14:04:56.059756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 124160 00:31:09.304 [2024-11-04 14:04:56.059789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125120 00:31:09.304 [2024-11-04 14:04:56.059814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 124160 00:31:09.304 [2024-11-04 14:04:56.059827] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0077 00:31:09.304 [2024-11-04 14:04:56.059839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:09.304 [2024-11-04 14:04:56.059852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:09.304 [2024-11-04 14:04:56.059864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:09.304 [2024-11-04 14:04:56.059875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:09.304 [2024-11-04 14:04:56.059886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:09.304 [2024-11-04 14:04:56.059898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.304 [2024-11-04 14:04:56.059911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:09.304 [2024-11-04 14:04:56.059924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:31:09.304 [2024-11-04 14:04:56.059936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.304 [2024-11-04 14:04:56.084882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.304 [2024-11-04 14:04:56.084946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:09.304 [2024-11-04 14:04:56.084964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.883 ms 00:31:09.304 [2024-11-04 14:04:56.084977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.304 [2024-11-04 14:04:56.085616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.304 [2024-11-04 14:04:56.085638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:09.304 
[2024-11-04 14:04:56.085652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:31:09.304 [2024-11-04 14:04:56.085665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.304 [2024-11-04 14:04:56.148369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.304 [2024-11-04 14:04:56.148435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:09.304 [2024-11-04 14:04:56.148457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.304 [2024-11-04 14:04:56.148474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.304 [2024-11-04 14:04:56.148590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.304 [2024-11-04 14:04:56.148608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:09.304 [2024-11-04 14:04:56.148624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.304 [2024-11-04 14:04:56.148640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.304 [2024-11-04 14:04:56.148771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.304 [2024-11-04 14:04:56.148802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.304 [2024-11-04 14:04:56.148817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.304 [2024-11-04 14:04:56.148829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.304 [2024-11-04 14:04:56.148850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.304 [2024-11-04 14:04:56.148863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.304 [2024-11-04 14:04:56.148876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.304 [2024-11-04 14:04:56.148888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.303566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.303644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.563 [2024-11-04 14:04:56.303664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.303678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.433344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.433404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.563 [2024-11-04 14:04:56.433421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.433435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.433550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.433577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:09.563 [2024-11-04 14:04:56.433592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.433605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.433659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.433674] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:09.563 [2024-11-04 14:04:56.433686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.433698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.433817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.433843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:09.563 [2024-11-04 14:04:56.433857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.433869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.433916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.433931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:09.563 [2024-11-04 14:04:56.433944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.433956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.434000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.434018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:09.563 [2024-11-04 14:04:56.434030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.434042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.434091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.563 [2024-11-04 14:04:56.434105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:09.563 [2024-11-04 14:04:56.434118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.563 [2024-11-04 14:04:56.434130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.563 [2024-11-04 14:04:56.434272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 758.435 ms, result 0 00:31:12.097 00:31:12.097 00:31:12.097 14:04:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:13.473 14:05:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:13.752 [2024-11-04 14:05:00.472685] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
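Two of the numbers above can be sanity-checked by hand. The ftl_dev_dump_stats block reports total writes: 125120 against user writes: 124160, and WAF (write amplification factor) is simply their ratio. Likewise, spdk_dd is invoked with --count=262144 and the earlier copy loop reported 1024/1024 [MB], which works out if each counted unit is one 4 KiB FTL block, as appears to be the case here:

    # Values quoted verbatim from the dump above.
    total_writes = 125_120   # all blocks written to media, metadata included
    user_writes = 124_160    # blocks written on behalf of the user
    print(f"WAF = {total_writes / user_writes:.4f}")  # -> 1.0077, as reported

    # --count=262144 units of 4096 bytes is exactly the 1024 MB copied:
    print(262_144 * 4096 // 2**20, "MiB")  # -> 1024
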
00:31:13.752 [2024-11-04 14:05:00.472898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80434 ] 00:31:13.752 [2024-11-04 14:05:00.665863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.013 [2024-11-04 14:05:00.783635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.273 [2024-11-04 14:05:01.174147] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:14.273 [2024-11-04 14:05:01.174227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:14.532 [2024-11-04 14:05:01.352839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.352908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:14.532 [2024-11-04 14:05:01.352933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:14.532 [2024-11-04 14:05:01.352950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.353030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.353054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:14.532 [2024-11-04 14:05:01.353071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:14.532 [2024-11-04 14:05:01.353087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.353120] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:14.532 [2024-11-04 14:05:01.354773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:14.532 [2024-11-04 14:05:01.354819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.354837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:14.532 [2024-11-04 14:05:01.354854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:31:14.532 [2024-11-04 14:05:01.354870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.356791] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:14.532 [2024-11-04 14:05:01.387275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.387348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:14.532 [2024-11-04 14:05:01.387372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.498 ms 00:31:14.532 [2024-11-04 14:05:01.387390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.387491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.387511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:14.532 [2024-11-04 14:05:01.387528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:14.532 [2024-11-04 14:05:01.387544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.395251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
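The startup sequence that follows prints the same geometry as the first run, and its numbers are mutually consistent: the layout dump reports L2P entries: 20971520 with L2P address size: 4, which is exactly the 80.00 MiB l2p region (blk_sz:0x5000 in the superblock dump, counted in 4 KiB FTL blocks), and the same entry count times the block size gives 80 GiB of addressable user space out of the 102400 MiB data region, with the remainder held back by the FTL as over-provisioning. A short check, using only values printed in this log:

    entries = 20_971_520   # 'L2P entries' from the layout dump
    addr_size = 4          # 'L2P address size' in bytes
    blk = 4096             # FTL block size in bytes

    print(entries * addr_size / 2**20, "MiB")  # 80.0 -> 'Region l2p ... 80.00 MiB'
    print(0x5000 * blk / 2**20, "MiB")         # 80.0 -> superblock blk_sz:0x5000
    print(entries * blk / 2**30, "GiB")        # 80.0 GiB addressable user space
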
00:31:14.532 [2024-11-04 14:05:01.395296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:14.532 [2024-11-04 14:05:01.395317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.575 ms 00:31:14.532 [2024-11-04 14:05:01.395339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.395460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.395482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:14.532 [2024-11-04 14:05:01.395498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:31:14.532 [2024-11-04 14:05:01.395514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.395602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.395623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:14.532 [2024-11-04 14:05:01.395649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:14.532 [2024-11-04 14:05:01.395676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.395736] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:14.532 [2024-11-04 14:05:01.403049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.403093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:14.532 [2024-11-04 14:05:01.403117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.322 ms 00:31:14.532 [2024-11-04 14:05:01.403133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.403186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.403205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:14.532 [2024-11-04 14:05:01.403221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:14.532 [2024-11-04 14:05:01.403237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.403317] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:14.532 [2024-11-04 14:05:01.403351] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:14.532 [2024-11-04 14:05:01.403407] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:14.532 [2024-11-04 14:05:01.403439] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:14.532 [2024-11-04 14:05:01.403598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:14.532 [2024-11-04 14:05:01.403620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:14.532 [2024-11-04 14:05:01.403642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:14.532 [2024-11-04 14:05:01.403662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:14.532 [2024-11-04 14:05:01.403682] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:14.532 [2024-11-04 14:05:01.403699] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:14.532 [2024-11-04 14:05:01.403715] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:14.532 [2024-11-04 14:05:01.403729] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:14.532 [2024-11-04 14:05:01.403751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:14.532 [2024-11-04 14:05:01.403767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.403784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:14.532 [2024-11-04 14:05:01.403799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:31:14.532 [2024-11-04 14:05:01.403815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.403937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.532 [2024-11-04 14:05:01.403954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:14.532 [2024-11-04 14:05:01.403970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:31:14.532 [2024-11-04 14:05:01.403985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.532 [2024-11-04 14:05:01.404139] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:14.532 [2024-11-04 14:05:01.404162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:14.532 [2024-11-04 14:05:01.404179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:14.532 [2024-11-04 14:05:01.404195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:14.532 [2024-11-04 14:05:01.404226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:14.532 [2024-11-04 14:05:01.404258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:14.532 [2024-11-04 14:05:01.404273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:14.532 [2024-11-04 14:05:01.404303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:14.532 [2024-11-04 14:05:01.404317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:14.532 [2024-11-04 14:05:01.404332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:14.532 [2024-11-04 14:05:01.404347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:14.532 [2024-11-04 14:05:01.404362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:14.532 [2024-11-04 14:05:01.404393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:14.532 [2024-11-04 14:05:01.404424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:14.532 [2024-11-04 14:05:01.404439] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:14.532 [2024-11-04 14:05:01.404469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.532 [2024-11-04 14:05:01.404498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:14.532 [2024-11-04 14:05:01.404513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:14.532 [2024-11-04 14:05:01.404527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.532 [2024-11-04 14:05:01.404541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:14.533 [2024-11-04 14:05:01.404556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:14.533 [2024-11-04 14:05:01.404585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.533 [2024-11-04 14:05:01.404601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:14.533 [2024-11-04 14:05:01.404616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:14.533 [2024-11-04 14:05:01.404630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:14.533 [2024-11-04 14:05:01.404645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:14.533 [2024-11-04 14:05:01.404660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:14.533 [2024-11-04 14:05:01.404675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:14.533 [2024-11-04 14:05:01.404689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:14.533 [2024-11-04 14:05:01.404704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:14.533 [2024-11-04 14:05:01.404718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:14.533 [2024-11-04 14:05:01.404732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:14.533 [2024-11-04 14:05:01.404748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:14.533 [2024-11-04 14:05:01.404762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.533 [2024-11-04 14:05:01.404787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:14.533 [2024-11-04 14:05:01.404803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:14.533 [2024-11-04 14:05:01.404818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.533 [2024-11-04 14:05:01.404833] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:14.533 [2024-11-04 14:05:01.404849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:14.533 [2024-11-04 14:05:01.404865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:14.533 [2024-11-04 14:05:01.404881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:14.533 [2024-11-04 14:05:01.404897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:14.533 [2024-11-04 14:05:01.404913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:14.533 [2024-11-04 14:05:01.404928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:14.533 
[2024-11-04 14:05:01.404943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:31:14.533 [2024-11-04 14:05:01.404958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:31:14.533 [2024-11-04 14:05:01.404972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:31:14.533 [2024-11-04 14:05:01.404990] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:31:14.533 [2024-11-04 14:05:01.405009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:31:14.533 [2024-11-04 14:05:01.405044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:31:14.533 [2024-11-04 14:05:01.405061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:31:14.533 [2024-11-04 14:05:01.405078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:31:14.533 [2024-11-04 14:05:01.405094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:31:14.533 [2024-11-04 14:05:01.405111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:31:14.533 [2024-11-04 14:05:01.405127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:31:14.533 [2024-11-04 14:05:01.405144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:31:14.533 [2024-11-04 14:05:01.405160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:31:14.533 [2024-11-04 14:05:01.405177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:31:14.533 [2024-11-04 14:05:01.405259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:14.533 [2024-11-04 14:05:01.405283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:14.533 [2024-11-04 14:05:01.405318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:14.533 [2024-11-04 14:05:01.405335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:14.533 [2024-11-04 14:05:01.405353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:14.533 [2024-11-04 14:05:01.405371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.533 [2024-11-04 14:05:01.405387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:14.533 [2024-11-04 14:05:01.405403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms
00:31:14.533 [2024-11-04 14:05:01.405419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.533 [2024-11-04 14:05:01.449050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.533 [2024-11-04 14:05:01.449101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:14.533 [2024-11-04 14:05:01.449117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.558 ms
00:31:14.533 [2024-11-04 14:05:01.449133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.533 [2024-11-04 14:05:01.449229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.533 [2024-11-04 14:05:01.449241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:14.533 [2024-11-04 14:05:01.449251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:31:14.533 [2024-11-04 14:05:01.449261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.507085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.507133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:14.792 [2024-11-04 14:05:01.507165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.720 ms
00:31:14.792 [2024-11-04 14:05:01.507177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.507231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.507242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:14.792 [2024-11-04 14:05:01.507258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:14.792 [2024-11-04 14:05:01.507269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.507817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.507842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:14.792 [2024-11-04 14:05:01.507854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms
00:31:14.792 [2024-11-04 14:05:01.507864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.507994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.508008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:14.792 [2024-11-04 14:05:01.508019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms
00:31:14.792 [2024-11-04 14:05:01.508033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.528906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.528948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:14.792 [2024-11-04 14:05:01.528970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.848 ms
00:31:14.792 [2024-11-04 14:05:01.528985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.548629] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:31:14.792 [2024-11-04 14:05:01.548670] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:14.792 [2024-11-04 14:05:01.548687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.548698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:31:14.792 [2024-11-04 14:05:01.548711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.569 ms
00:31:14.792 [2024-11-04 14:05:01.548722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.579674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.579721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:31:14.792 [2024-11-04 14:05:01.579736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.905 ms
00:31:14.792 [2024-11-04 14:05:01.579747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.598675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.598752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:31:14.792 [2024-11-04 14:05:01.598783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.878 ms
00:31:14.792 [2024-11-04 14:05:01.598794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.617388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.617425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:31:14.792 [2024-11-04 14:05:01.617455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.551 ms
00:31:14.792 [2024-11-04 14:05:01.617465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.618269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.618297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:31:14.792 [2024-11-04 14:05:01.618310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms
00:31:14.792 [2024-11-04 14:05:01.618324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:14.792 [2024-11-04 14:05:01.709894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:14.792 [2024-11-04 14:05:01.709961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:31:14.792 [2024-11-04 14:05:01.709985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.544 ms
00:31:14.792 [2024-11-04 14:05:01.709996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.722251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:31:15.051 [2024-11-04 14:05:01.725724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.725761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:31:15.051 [2024-11-04 14:05:01.725779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.663 ms
00:31:15.051 [2024-11-04 14:05:01.725791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.725941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.725955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:31:15.051 [2024-11-04 14:05:01.725967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:31:15.051 [2024-11-04 14:05:01.725981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.727734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.727773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:31:15.051 [2024-11-04 14:05:01.727787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms
00:31:15.051 [2024-11-04 14:05:01.727799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.727834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.727846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:31:15.051 [2024-11-04 14:05:01.727858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:31:15.051 [2024-11-04 14:05:01.727870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.727915] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:15.051 [2024-11-04 14:05:01.727940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.727958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:31:15.051 [2024-11-04 14:05:01.727980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:31:15.051 [2024-11-04 14:05:01.727992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.766054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.766094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:31:15.051 [2024-11-04 14:05:01.766109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.037 ms
00:31:15.051 [2024-11-04 14:05:01.766126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.766203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.051 [2024-11-04 14:05:01.766216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:15.051 [2024-11-04 14:05:01.766227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:31:15.051 [2024-11-04 14:05:01.766237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.051 [2024-11-04 14:05:01.767427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.071 ms, result 0
00:31:16.427 [2024-11-04T14:05:04.285Z] Copying: 1108/1048576 [kB] (1108 kBps)
[2024-11-04T14:05:05.221Z] Copying: 8312/1048576 [kB] (7204 kBps)
[2024-11-04T14:05:06.159Z] Copying: 47/1024 [MB] (39 MBps)
[2024-11-04T14:05:07.094Z] Copying: 87/1024 [MB] (39 MBps)
[2024-11-04T14:05:08.089Z] Copying: 125/1024 [MB] (38 MBps)
[2024-11-04T14:05:09.025Z] Copying: 164/1024 [MB] (38 MBps)
[2024-11-04T14:05:10.401Z] Copying: 201/1024 [MB] (36 MBps)
[2024-11-04T14:05:11.336Z] Copying: 239/1024 [MB] (38 MBps)
[2024-11-04T14:05:12.271Z] Copying: 278/1024 [MB] (38 MBps)
[2024-11-04T14:05:13.206Z] Copying: 316/1024 [MB] (38 MBps)
[2024-11-04T14:05:14.146Z] Copying: 355/1024 [MB] (39 MBps)
[2024-11-04T14:05:15.082Z] Copying: 395/1024 [MB] (39 MBps)
[2024-11-04T14:05:16.018Z] Copying: 433/1024 [MB] (38 MBps)
[2024-11-04T14:05:17.010Z] Copying: 472/1024 [MB] (38 MBps)
[2024-11-04T14:05:18.387Z] Copying: 510/1024 [MB] (38 MBps)
[2024-11-04T14:05:19.323Z] Copying: 546/1024 [MB] (36 MBps)
[2024-11-04T14:05:20.257Z] Copying: 585/1024 [MB] (38 MBps)
[2024-11-04T14:05:21.249Z] Copying: 620/1024 [MB] (34 MBps)
[2024-11-04T14:05:22.185Z] Copying: 658/1024 [MB] (38 MBps)
[2024-11-04T14:05:23.121Z] Copying: 697/1024 [MB] (38 MBps)
[2024-11-04T14:05:24.056Z] Copying: 736/1024 [MB] (39 MBps)
[2024-11-04T14:05:25.432Z] Copying: 776/1024 [MB] (39 MBps)
[2024-11-04T14:05:25.999Z] Copying: 814/1024 [MB] (38 MBps)
[2024-11-04T14:05:27.374Z] Copying: 851/1024 [MB] (37 MBps)
[2024-11-04T14:05:28.309Z] Copying: 888/1024 [MB] (37 MBps)
[2024-11-04T14:05:29.242Z] Copying: 926/1024 [MB] (38 MBps)
[2024-11-04T14:05:30.177Z] Copying: 965/1024 [MB] (38 MBps)
[2024-11-04T14:05:30.743Z] Copying: 1004/1024 [MB] (39 MBps)
[2024-11-04T14:05:31.002Z] Copying: 1024/1024 [MB] (average 35 MBps)
[2024-11-04 14:05:30.787561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.788018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:44.080 [2024-11-04 14:05:30.788051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:44.080 [2024-11-04 14:05:30.788062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.788094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:44.080 [2024-11-04 14:05:30.793154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.793197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:44.080 [2024-11-04 14:05:30.793211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms
00:31:44.080 [2024-11-04 14:05:30.793222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.793434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.793453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:44.080 [2024-11-04 14:05:30.793469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms
00:31:44.080 [2024-11-04 14:05:30.793480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.804177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.804227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:44.080 [2024-11-04 14:05:30.804244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.678 ms
00:31:44.080 [2024-11-04 14:05:30.804258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.809671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.809707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:31:44.080 [2024-11-04 14:05:30.809719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.379 ms
00:31:44.080 [2024-11-04 14:05:30.809736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.846859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.846902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:31:44.080 [2024-11-04 14:05:30.846916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.059 ms
00:31:44.080 [2024-11-04 14:05:30.846927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.867206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.867249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:31:44.080 [2024-11-04 14:05:30.867263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.224 ms
00:31:44.080 [2024-11-04 14:05:30.867273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.869034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.869074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:31:44.080 [2024-11-04 14:05:30.869087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms
00:31:44.080 [2024-11-04 14:05:30.869097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.905579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.905621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:31:44.080 [2024-11-04 14:05:30.905651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.440 ms
00:31:44.080 [2024-11-04 14:05:30.905662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.941543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.941588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:31:44.080 [2024-11-04 14:05:30.941630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.842 ms
00:31:44.080 [2024-11-04 14:05:30.941641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.080 [2024-11-04 14:05:30.977195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.080 [2024-11-04 14:05:30.977236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:31:44.080 [2024-11-04 14:05:30.977250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.516 ms
00:31:44.080 [2024-11-04 14:05:30.977260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.341 [2024-11-04 14:05:31.013429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.341 [2024-11-04 14:05:31.013471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:31:44.341 [2024-11-04 14:05:31.013484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.075 ms
00:31:44.341 [2024-11-04 14:05:31.013494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.341 [2024-11-04 14:05:31.013549] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:44.341 [2024-11-04 14:05:31.013577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:31:44.341 [2024-11-04 14:05:31.013592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:31:44.341 [2024-11-04 14:05:31.013604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.013993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:31:44.341 [2024-11-04 14:05:31.014488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:31:44.342 [2024-11-04 14:05:31.014671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:44.342 [2024-11-04 14:05:31.014681] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d83b1a07-9ea8-46fc-a450-a77fb1d29098
00:31:44.342 [2024-11-04 14:05:31.014693] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:31:44.342 [2024-11-04 14:05:31.014702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 140480
00:31:44.342 [2024-11-04 14:05:31.014712] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 138496
00:31:44.342 [2024-11-04 14:05:31.014731] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0143
00:31:44.342 [2024-11-04 14:05:31.014741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:44.342 [2024-11-04 14:05:31.014751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:31:44.342 [2024-11-04 14:05:31.014761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:31:44.342 [2024-11-04 14:05:31.014781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:31:44.342 [2024-11-04 14:05:31.014790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:31:44.342 [2024-11-04 14:05:31.014800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.342 [2024-11-04 14:05:31.014812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:44.342 [2024-11-04 14:05:31.014822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms
00:31:44.342 [2024-11-04 14:05:31.014832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.035195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.342 [2024-11-04 14:05:31.035238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:44.342 [2024-11-04 14:05:31.035251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.324 ms
00:31:44.342 [2024-11-04 14:05:31.035262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.035807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:44.342 [2024-11-04 14:05:31.035830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:44.342 [2024-11-04 14:05:31.035841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms
00:31:44.342 [2024-11-04 14:05:31.035852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.087838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.342 [2024-11-04 14:05:31.087881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:44.342 [2024-11-04 14:05:31.087911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.342 [2024-11-04 14:05:31.087922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.087978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.342 [2024-11-04 14:05:31.087989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:44.342 [2024-11-04 14:05:31.087999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.342 [2024-11-04 14:05:31.088009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.088075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.342 [2024-11-04 14:05:31.088094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:44.342 [2024-11-04 14:05:31.088105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.342 [2024-11-04 14:05:31.088114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.088132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.342 [2024-11-04 14:05:31.088143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:44.342 [2024-11-04 14:05:31.088153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.342 [2024-11-04 14:05:31.088163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.342 [2024-11-04 14:05:31.215532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.342 [2024-11-04 14:05:31.215620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:44.342 [2024-11-04 14:05:31.215637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.342 [2024-11-04 14:05:31.215648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.319743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.319803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:44.601 [2024-11-04 14:05:31.319819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.319831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.319924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.319936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:44.601 [2024-11-04 14:05:31.319953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.319963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.320012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.320024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:44.601 [2024-11-04 14:05:31.320034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.320044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.320160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.320174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:44.601 [2024-11-04 14:05:31.320185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.320200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.320238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.320250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:44.601 [2024-11-04 14:05:31.320260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.320271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.320308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.320319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:44.601 [2024-11-04 14:05:31.320329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.320344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.320390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:44.601 [2024-11-04 14:05:31.320402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:44.601 [2024-11-04 14:05:31.320412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:44.601 [2024-11-04 14:05:31.320423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:44.601 [2024-11-04 14:05:31.320541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.945 ms, result 0
00:31:45.536
00:31:45.536
00:31:45.536 14:05:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:47.437 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:31:47.437 14:05:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:31:47.695 [2024-11-04 14:05:34.420457] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization...
00:31:47.695 [2024-11-04 14:05:34.421098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80771 ]
00:31:47.952 [2024-11-04 14:05:34.627898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:47.952 [2024-11-04 14:05:34.791235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:48.519 [2024-11-04 14:05:35.159608] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:48.519 [2024-11-04 14:05:35.159682] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:48.519 [2024-11-04 14:05:35.324359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.519 [2024-11-04 14:05:35.324416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:31:48.519 [2024-11-04 14:05:35.324441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:31:48.519 [2024-11-04 14:05:35.324452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.324519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.324532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:48.520 [2024-11-04 14:05:35.324546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:31:48.520 [2024-11-04 14:05:35.324556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.324596] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:48.520 [2024-11-04 14:05:35.325658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:48.520 [2024-11-04 14:05:35.325830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.325847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:48.520 [2024-11-04 14:05:35.325862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms
00:31:48.520 [2024-11-04 14:05:35.325873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.327406] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:31:48.520 [2024-11-04 14:05:35.348146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.348298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:31:48.520 [2024-11-04 14:05:35.348322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.738 ms
00:31:48.520 [2024-11-04 14:05:35.348335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.348412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.348426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:31:48.520 [2024-11-04 14:05:35.348439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:31:48.520 [2024-11-04 14:05:35.348450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.355429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.355637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:48.520 [2024-11-04 14:05:35.355679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.896 ms
00:31:48.520 [2024-11-04 14:05:35.355693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.355803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.355819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:48.520 [2024-11-04 14:05:35.355832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:31:48.520 [2024-11-04 14:05:35.355845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.355901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.355915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:31:48.520 [2024-11-04 14:05:35.355927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:31:48.520 [2024-11-04 14:05:35.355939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.355969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:31:48.520 [2024-11-04 14:05:35.361504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.361548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:48.520 [2024-11-04 14:05:35.361562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.541 ms
00:31:48.520 [2024-11-04 14:05:35.361600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.361638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.361651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:31:48.520 [2024-11-04 14:05:35.361663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:31:48.520 [2024-11-04 14:05:35.361674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.361741] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:31:48.520 [2024-11-04 14:05:35.361767] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:31:48.520 [2024-11-04 14:05:35.361806] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:31:48.520 [2024-11-04 14:05:35.361829] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:31:48.520 [2024-11-04 14:05:35.361929] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:31:48.520 [2024-11-04 14:05:35.361944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:31:48.520 [2024-11-04 14:05:35.361958] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:31:48.520 [2024-11-04 14:05:35.361973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:31:48.520 [2024-11-04 14:05:35.361986] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:31:48.520 [2024-11-04 14:05:35.361998] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:31:48.520 [2024-11-04 14:05:35.362010] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:31:48.520 [2024-11-04 14:05:35.362021] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:31:48.520 [2024-11-04 14:05:35.362032] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:31:48.520 [2024-11-04 14:05:35.362048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.362058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:31:48.520 [2024-11-04 14:05:35.362070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms
00:31:48.520 [2024-11-04 14:05:35.362081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.362165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.520 [2024-11-04 14:05:35.362177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:31:48.520 [2024-11-04 14:05:35.362188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:31:48.520 [2024-11-04 14:05:35.362199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.520 [2024-11-04 14:05:35.362305] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:31:48.520 [2024-11-04 14:05:35.362325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:31:48.520 [2024-11-04 14:05:35.362337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:31:48.520 [2024-11-04 14:05:35.362369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:31:48.520 [2024-11-04 14:05:35.362401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:48.520 [2024-11-04 14:05:35.362422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:31:48.520 [2024-11-04 14:05:35.362432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:31:48.520 [2024-11-04 14:05:35.362442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:48.520 [2024-11-04 14:05:35.362452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:31:48.520 [2024-11-04 14:05:35.362463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:31:48.520 [2024-11-04 14:05:35.362484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:31:48.520 [2024-11-04 14:05:35.362505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:31:48.520 [2024-11-04 14:05:35.362536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:31:48.520 [2024-11-04 14:05:35.362579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:31:48.520 [2024-11-04 14:05:35.362610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:31:48.520 [2024-11-04 14:05:35.362641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:48.520 [2024-11-04 14:05:35.362660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:31:48.520 [2024-11-04 14:05:35.362670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:48.520 [2024-11-04 14:05:35.362690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:31:48.520 [2024-11-04 14:05:35.362700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:31:48.520 [2024-11-04 14:05:35.362710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:48.520 [2024-11-04 14:05:35.362720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:31:48.520 [2024-11-04 14:05:35.362730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:31:48.520 [2024-11-04 14:05:35.362739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:48.520 [2024-11-04 14:05:35.362749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:31:48.520 [2024-11-04 14:05:35.362759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:31:48.520 [2024-11-04 14:05:35.362769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:48.521 [2024-11-04 14:05:35.362779] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:31:48.521 [2024-11-04 14:05:35.362789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:31:48.521 [2024-11-04 14:05:35.362800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:48.521 [2024-11-04 14:05:35.362811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:48.521 [2024-11-04 14:05:35.362822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:31:48.521 [2024-11-04 14:05:35.362833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:31:48.521 [2024-11-04 14:05:35.362843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:31:48.521 [2024-11-04 14:05:35.362854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:31:48.521 [2024-11-04 14:05:35.362864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:31:48.521 [2024-11-04 14:05:35.362874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:31:48.521 [2024-11-04 14:05:35.362886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:31:48.521 [2024-11-04 14:05:35.362900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.362912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:31:48.521 [2024-11-04 14:05:35.362924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:31:48.521 [2024-11-04 14:05:35.362935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:31:48.521 [2024-11-04 14:05:35.362946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:31:48.521 [2024-11-04 14:05:35.362957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:31:48.521 [2024-11-04 14:05:35.362969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:31:48.521 [2024-11-04 14:05:35.362980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:31:48.521 [2024-11-04 14:05:35.362992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:31:48.521 [2024-11-04 14:05:35.363003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:31:48.521 [2024-11-04 14:05:35.363014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.363025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.363036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.363047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.363058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:31:48.521 [2024-11-04 14:05:35.363070] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:48.521 [2024-11-04 14:05:35.363086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.363098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:48.521 [2024-11-04 14:05:35.363109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:48.521 [2024-11-04 14:05:35.363121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:48.521 [2024-11-04 14:05:35.363132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:48.521 [2024-11-04 14:05:35.363143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.521 [2024-11-04 14:05:35.363155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:48.521 [2024-11-04 14:05:35.363166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms
00:31:48.521 [2024-11-04 14:05:35.363179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.521 [2024-11-04 14:05:35.404768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.521 [2024-11-04 14:05:35.404830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:48.521 [2024-11-04 14:05:35.404847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.535 ms
00:31:48.521 [2024-11-04 14:05:35.404858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.521 [2024-11-04 14:05:35.404963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.521 [2024-11-04 14:05:35.404975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:48.521 [2024-11-04 14:05:35.404986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:31:48.521 [2024-11-04 14:05:35.404996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.468139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.468333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:48.779 [2024-11-04 14:05:35.468358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.060 ms
00:31:48.779 [2024-11-04 14:05:35.468371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.468435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.468448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:48.779 [2024-11-04 14:05:35.468461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:31:48.779 [2024-11-04 14:05:35.468478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.469021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.469039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:48.779 [2024-11-04 14:05:35.469052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms
00:31:48.779 [2024-11-04 14:05:35.469063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.469193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.469208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:48.779 [2024-11-04 14:05:35.469220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms
00:31:48.779 [2024-11-04 14:05:35.469237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.489169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.489215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:48.779 [2024-11-04 14:05:35.489236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.909 ms
00:31:48.779 [2024-11-04 14:05:35.489247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.510141] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:31:48.779 [2024-11-04 14:05:35.510202] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:48.779 [2024-11-04 14:05:35.510220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.510231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:31:48.779 [2024-11-04 14:05:35.510244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.836 ms
00:31:48.779 [2024-11-04 14:05:35.510254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.541433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.541517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:31:48.779 [2024-11-04 14:05:35.541535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.131 ms
00:31:48.779 [2024-11-04 14:05:35.541546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.561128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.561304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:31:48.779 [2024-11-04 14:05:35.561326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.480 ms
00:31:48.779 [2024-11-04 14:05:35.561338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.580243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.580401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:31:48.779 [2024-11-04 14:05:35.580422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms
00:31:48.779 [2024-11-04 14:05:35.580433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.581450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.581490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:31:48.779 [2024-11-04 14:05:35.581505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms
00:31:48.779 [2024-11-04 14:05:35.581521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:48.779 [2024-11-04 14:05:35.675683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:48.779 [2024-11-04 14:05:35.675751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:31:48.779 [2024-11-04 14:05:35.675777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 94.133 ms 00:31:48.779 [2024-11-04 14:05:35.675790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.779 [2024-11-04 14:05:35.687775] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:48.779 [2024-11-04 14:05:35.690984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.779 [2024-11-04 14:05:35.691020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:48.779 [2024-11-04 14:05:35.691035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.110 ms 00:31:48.780 [2024-11-04 14:05:35.691046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.780 [2024-11-04 14:05:35.691150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.780 [2024-11-04 14:05:35.691164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:48.780 [2024-11-04 14:05:35.691177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:48.780 [2024-11-04 14:05:35.691191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.780 [2024-11-04 14:05:35.692084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.780 [2024-11-04 14:05:35.692111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:48.780 [2024-11-04 14:05:35.692123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:31:48.780 [2024-11-04 14:05:35.692133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.780 [2024-11-04 14:05:35.692161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.780 [2024-11-04 14:05:35.692172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:48.780 [2024-11-04 14:05:35.692183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:48.780 [2024-11-04 14:05:35.692193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.780 [2024-11-04 14:05:35.692228] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:48.780 [2024-11-04 14:05:35.692244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.780 [2024-11-04 14:05:35.692254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:48.780 [2024-11-04 14:05:35.692265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:48.780 [2024-11-04 14:05:35.692275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.038 [2024-11-04 14:05:35.730542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.038 [2024-11-04 14:05:35.730617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:49.038 [2024-11-04 14:05:35.730634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.246 ms 00:31:49.038 [2024-11-04 14:05:35.730654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.038 [2024-11-04 14:05:35.730746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.039 [2024-11-04 14:05:35.730760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:49.039 [2024-11-04 14:05:35.730771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:49.039 [2024-11-04 14:05:35.730782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
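Each management step in the startup sequence above is reported by trace_step as a fixed quadruplet (Action, name, duration, status), so the output lends itself to simple post-processing. As a minimal sketch — assuming this console output has been saved to a file named ftl.log, a name used purely for illustration — the per-step durations can be totalled with awk:

    # Sum every per-step duration reported by trace_step; the value is the
    # field just before the trailing "ms". ftl.log is a hypothetical file name.
    awk '/trace_step/ && /duration:/ { total += $(NF - 1) } END { printf "steps: %.3f ms\n", total }' ftl.log

The sum should land close to the overall 'FTL startup' duration of 407.157 ms that finish_msg prints next; the remainder is time spent between steps.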
00:31:49.039 [2024-11-04 14:05:35.732067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.157 ms, result 0
00:31:50.413 [2024-11-04T14:05:38.270Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-04T14:05:39.204Z] Copying: 68/1024 [MB] (34 MBps) [2024-11-04T14:05:40.141Z] Copying: 101/1024 [MB] (33 MBps) [2024-11-04T14:05:41.107Z] Copying: 134/1024 [MB] (32 MBps) [2024-11-04T14:05:42.041Z] Copying: 166/1024 [MB] (32 MBps) [2024-11-04T14:05:42.973Z] Copying: 198/1024 [MB] (32 MBps) [2024-11-04T14:05:44.348Z] Copying: 231/1024 [MB] (32 MBps) [2024-11-04T14:05:45.283Z] Copying: 263/1024 [MB] (32 MBps) [2024-11-04T14:05:46.216Z] Copying: 295/1024 [MB] (31 MBps) [2024-11-04T14:05:47.150Z] Copying: 328/1024 [MB] (32 MBps) [2024-11-04T14:05:48.084Z] Copying: 360/1024 [MB] (32 MBps) [2024-11-04T14:05:49.040Z] Copying: 393/1024 [MB] (32 MBps) [2024-11-04T14:05:49.973Z] Copying: 426/1024 [MB] (33 MBps) [2024-11-04T14:05:51.349Z] Copying: 459/1024 [MB] (33 MBps) [2024-11-04T14:05:52.283Z] Copying: 492/1024 [MB] (33 MBps) [2024-11-04T14:05:53.218Z] Copying: 525/1024 [MB] (33 MBps) [2024-11-04T14:05:54.154Z] Copying: 558/1024 [MB] (32 MBps) [2024-11-04T14:05:55.090Z] Copying: 590/1024 [MB] (32 MBps) [2024-11-04T14:05:56.025Z] Copying: 624/1024 [MB] (33 MBps) [2024-11-04T14:05:56.961Z] Copying: 657/1024 [MB] (33 MBps) [2024-11-04T14:05:58.338Z] Copying: 690/1024 [MB] (32 MBps) [2024-11-04T14:05:59.283Z] Copying: 723/1024 [MB] (32 MBps) [2024-11-04T14:06:00.227Z] Copying: 756/1024 [MB] (33 MBps) [2024-11-04T14:06:01.161Z] Copying: 788/1024 [MB] (32 MBps) [2024-11-04T14:06:02.095Z] Copying: 822/1024 [MB] (33 MBps) [2024-11-04T14:06:03.029Z] Copying: 854/1024 [MB] (32 MBps) [2024-11-04T14:06:04.021Z] Copying: 883/1024 [MB] (28 MBps) [2024-11-04T14:06:05.395Z] Copying: 915/1024 [MB] (31 MBps) [2024-11-04T14:06:05.961Z] Copying: 948/1024 [MB] (32 MBps) [2024-11-04T14:06:07.335Z] Copying: 980/1024 [MB] (32 MBps) [2024-11-04T14:06:07.335Z] Copying: 1013/1024 [MB] (32 MBps) [2024-11-04T14:06:07.594Z] Copying: 1024/1024 [MB] (average 32 MBps)
[2024-11-04 14:06:07.524712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.672 [2024-11-04 14:06:07.525116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:20.672 [2024-11-04 14:06:07.525163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:32:20.672 [2024-11-04 14:06:07.525185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.672 [2024-11-04 14:06:07.525272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:20.672 [2024-11-04 14:06:07.531351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.672 [2024-11-04 14:06:07.531407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:20.672 [2024-11-04 14:06:07.531432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.039 ms
00:32:20.672 [2024-11-04 14:06:07.531444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.672 [2024-11-04 14:06:07.531726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.672 [2024-11-04 14:06:07.531744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:20.672 [2024-11-04 14:06:07.531757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms
00:32:20.672 [2024-11-04 14:06:07.531768]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.672 [2024-11-04 14:06:07.535639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.672 [2024-11-04 14:06:07.535699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:20.672 [2024-11-04 14:06:07.535715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.853 ms 00:32:20.672 [2024-11-04 14:06:07.535727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.672 [2024-11-04 14:06:07.541528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.672 [2024-11-04 14:06:07.541588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:20.672 [2024-11-04 14:06:07.541603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.767 ms 00:32:20.672 [2024-11-04 14:06:07.541614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.672 [2024-11-04 14:06:07.582957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.672 [2024-11-04 14:06:07.583242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:20.672 [2024-11-04 14:06:07.583267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.253 ms 00:32:20.672 [2024-11-04 14:06:07.583278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.605807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.931 [2024-11-04 14:06:07.606051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:20.931 [2024-11-04 14:06:07.606078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.412 ms 00:32:20.931 [2024-11-04 14:06:07.606090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.607811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.931 [2024-11-04 14:06:07.607865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:20.931 [2024-11-04 14:06:07.607880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:32:20.931 [2024-11-04 14:06:07.607891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.646456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.931 [2024-11-04 14:06:07.646533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:20.931 [2024-11-04 14:06:07.646550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.543 ms 00:32:20.931 [2024-11-04 14:06:07.646561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.685524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.931 [2024-11-04 14:06:07.685615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:20.931 [2024-11-04 14:06:07.685631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.885 ms 00:32:20.931 [2024-11-04 14:06:07.685642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.723665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.931 [2024-11-04 14:06:07.723729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:20.931 [2024-11-04 14:06:07.723745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 37.955 ms 00:32:20.931 [2024-11-04 14:06:07.723756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.761532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.931 [2024-11-04 14:06:07.761808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:20.931 [2024-11-04 14:06:07.761833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.658 ms 00:32:20.931 [2024-11-04 14:06:07.761844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.931 [2024-11-04 14:06:07.761949] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:20.931 [2024-11-04 14:06:07.761981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:20.931 [2024-11-04 14:06:07.762006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:20.931 [2024-11-04 14:06:07.762019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:32:20.931 [2024-11-04 14:06:07.762214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:20.931 [2024-11-04 14:06:07.762246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.762998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.763009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.763020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.763030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:20.932 [2024-11-04 14:06:07.763042] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:32:20.932 [2024-11-04 14:06:07.763053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:32:20.932 [2024-11-04 14:06:07.763064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:32:20.932 [2024-11-04 14:06:07.763075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:32:20.932 [2024-11-04 14:06:07.763086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:32:20.932 [2024-11-04 14:06:07.763097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:20.932 [2024-11-04 14:06:07.763115] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:20.932 [2024-11-04 14:06:07.763129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d83b1a07-9ea8-46fc-a450-a77fb1d29098
00:32:20.932 [2024-11-04 14:06:07.763140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:32:20.932 [2024-11-04 14:06:07.763151] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:32:20.932 [2024-11-04 14:06:07.763160] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:20.932 [2024-11-04 14:06:07.763171] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:20.932 [2024-11-04 14:06:07.763180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:20.932 [2024-11-04 14:06:07.763191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:20.932 [2024-11-04 14:06:07.763212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:20.932 [2024-11-04 14:06:07.763222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:20.932 [2024-11-04 14:06:07.763232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:20.932 [2024-11-04 14:06:07.763242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.932 [2024-11-04 14:06:07.763253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:20.932 [2024-11-04 14:06:07.763264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms
00:32:20.932 [2024-11-04 14:06:07.763274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.932 [2024-11-04 14:06:07.784157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.932 [2024-11-04 14:06:07.784214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:32:20.933 [2024-11-04 14:06:07.784230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.834 ms
00:32:20.933 [2024-11-04 14:06:07.784240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.933 [2024-11-04 14:06:07.784865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.933 [2024-11-04 14:06:07.784882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:32:20.933 [2024-11-04 14:06:07.784901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms
00:32:20.933 [2024-11-04 14:06:07.784910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.933 [2024-11-04 14:06:07.839847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:20.933 [2024-11-04 14:06:07.840130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:32:20.933 [2024-11-04 14:06:07.840154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:20.933 [2024-11-04 14:06:07.840166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.933 [2024-11-04 14:06:07.840243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:20.933 [2024-11-04 14:06:07.840255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:32:20.933 [2024-11-04 14:06:07.840272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:20.933 [2024-11-04 14:06:07.840283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.933 [2024-11-04 14:06:07.840364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:20.933 [2024-11-04 14:06:07.840377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:32:20.933 [2024-11-04 14:06:07.840389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:20.933 [2024-11-04 14:06:07.840399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.933 [2024-11-04 14:06:07.840417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:20.933 [2024-11-04 14:06:07.840428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:32:20.933 [2024-11-04 14:06:07.840438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:20.933 [2024-11-04 14:06:07.840453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:07.969179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:07.969241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:32:21.191 [2024-11-04 14:06:07.969258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:07.969269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.079972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:32:21.191 [2024-11-04 14:06:08.080058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:32:21.191 [2024-11-04 14:06:08.080208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:32:21.191 [2024-11-04 14:06:08.080332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:32:21.191 [2024-11-04 14:06:08.080499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:32:21.191 [2024-11-04 14:06:08.080614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:32:21.191 [2024-11-04 14:06:08.080690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.191 [2024-11-04 14:06:08.080752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:21.191 [2024-11-04 14:06:08.080763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.191 [2024-11-04 14:06:08.080812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.191 [2024-11-04 14:06:08.080978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.236 ms, result 0
00:32:22.633
00:32:22.633
00:32:22.633 14:06:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:32:24.532 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:32:24.532 14:06:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:32:24.532 14:06:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:32:24.532 14:06:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:24.532 14:06:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:32:24.532 Process with pid 79128 is not found
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79128
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79128 ']'
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 79128
00:32:24.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (79128) - No such process
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 79128 is not found'
00:32:24.532 14:06:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:32:24.790 Remove shared memory files
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:32:24.790 ************************************
00:32:24.790 END TEST ftl_dirty_shutdown
************************************
00:32:24.790
00:32:24.790
00:32:24.790 real 3m15.784s
00:32:24.790 user 3m42.396s
00:32:24.790 sys 0m39.030s
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:24.790 14:06:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:24.791 14:06:11 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:32:24.791 14:06:11 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:32:24.791 14:06:11 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:32:24.791 14:06:11 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:24.791 ************************************
00:32:24.791 START TEST ftl_upgrade_shutdown
************************************
00:32:24.791 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:32:25.050 * Looking for test storage...
00:32:25.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:32:25.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.050 --rc genhtml_branch_coverage=1
00:32:25.050 --rc genhtml_function_coverage=1
00:32:25.050 --rc genhtml_legend=1
00:32:25.050 --rc geninfo_all_blocks=1
00:32:25.050 --rc geninfo_unexecuted_blocks=1
00:32:25.050
00:32:25.050 '
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:32:25.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.050 --rc genhtml_branch_coverage=1
00:32:25.050 --rc genhtml_function_coverage=1
00:32:25.050 --rc genhtml_legend=1
00:32:25.050 --rc geninfo_all_blocks=1
00:32:25.050 --rc geninfo_unexecuted_blocks=1
00:32:25.050
00:32:25.050 '
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:32:25.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.050 --rc genhtml_branch_coverage=1
00:32:25.050 --rc genhtml_function_coverage=1
00:32:25.050 --rc genhtml_legend=1
00:32:25.050 --rc geninfo_all_blocks=1
00:32:25.050 --rc geninfo_unexecuted_blocks=1
00:32:25.050
00:32:25.050 '
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:32:25.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:25.050 --rc genhtml_branch_coverage=1
00:32:25.050 --rc genhtml_function_coverage=1
00:32:25.050 --rc genhtml_legend=1
00:32:25.050 --rc geninfo_all_blocks=1
00:32:25.050 --rc geninfo_unexecuted_blocks=1
00:32:25.050
00:32:25.050 '
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown --
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:25.050 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:25.051 14:06:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81215 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81215 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81215 ']' 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:25.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:25.051 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:25.051 [2024-11-04 14:06:11.958671] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
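The xtrace lines above walk through the version check in scripts/common.sh: 'lt 1.15 2' takes the installed lcov version (picked out with awk '{print $NF}'), splits both version strings on IFS=.-:, validates each component with decimal, and compares the arrays element by element; 1 < 2 already decides it, so the helper returns 0 and the legacy lcov coverage flags get set. A rough standalone rendering of that comparison loop — a sketch for illustration, not the verbatim helper:

    #!/usr/bin/env bash
    # Compare two dotted version strings the way the traced helper does:
    # split on . - :, then compare numerically field by field.
    version_lt() {
        local IFS=.-:
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1  # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "older"   # prints "older", matching the trace above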
00:32:25.051 [2024-11-04 14:06:11.959036] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81215 ] 00:32:25.308 [2024-11-04 14:06:12.144269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.567 [2024-11-04 14:06:12.318461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:26.501 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:32:26.759 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:27.018 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:32:27.018 { 00:32:27.018 "name": "basen1", 00:32:27.018 "aliases": [ 00:32:27.018 "6172d9c5-77f3-48a3-915b-aca92596f71c" 00:32:27.018 ], 00:32:27.018 "product_name": "NVMe disk", 00:32:27.018 "block_size": 4096, 00:32:27.018 "num_blocks": 1310720, 00:32:27.018 "uuid": "6172d9c5-77f3-48a3-915b-aca92596f71c", 00:32:27.018 "numa_id": -1, 00:32:27.018 "assigned_rate_limits": { 00:32:27.018 "rw_ios_per_sec": 0, 00:32:27.018 "rw_mbytes_per_sec": 0, 00:32:27.018 "r_mbytes_per_sec": 0, 00:32:27.018 "w_mbytes_per_sec": 0 00:32:27.018 }, 00:32:27.018 "claimed": true, 00:32:27.018 "claim_type": "read_many_write_one", 00:32:27.018 "zoned": false, 00:32:27.018 "supported_io_types": { 00:32:27.018 "read": true, 00:32:27.018 "write": true, 00:32:27.018 "unmap": true, 00:32:27.018 "flush": true, 00:32:27.018 "reset": true, 00:32:27.018 "nvme_admin": true, 00:32:27.018 "nvme_io": true, 00:32:27.018 "nvme_io_md": false, 00:32:27.018 "write_zeroes": true, 00:32:27.018 "zcopy": false, 00:32:27.018 "get_zone_info": false, 00:32:27.018 "zone_management": false, 00:32:27.018 "zone_append": false, 00:32:27.018 "compare": true, 00:32:27.018 "compare_and_write": false, 00:32:27.018 "abort": true, 00:32:27.018 "seek_hole": false, 00:32:27.018 "seek_data": false, 00:32:27.018 "copy": true, 00:32:27.018 "nvme_iov_md": false 00:32:27.018 }, 00:32:27.018 "driver_specific": { 00:32:27.018 "nvme": [ 00:32:27.018 { 00:32:27.018 "pci_address": "0000:00:11.0", 00:32:27.018 "trid": { 00:32:27.018 "trtype": "PCIe", 00:32:27.018 "traddr": "0000:00:11.0" 00:32:27.018 }, 00:32:27.018 "ctrlr_data": { 00:32:27.018 "cntlid": 0, 00:32:27.018 "vendor_id": "0x1b36", 00:32:27.018 "model_number": "QEMU NVMe Ctrl", 00:32:27.018 "serial_number": "12341", 00:32:27.018 "firmware_revision": "8.0.0", 00:32:27.018 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:27.018 "oacs": { 00:32:27.018 "security": 0, 00:32:27.018 "format": 1, 00:32:27.018 "firmware": 0, 00:32:27.018 "ns_manage": 1 00:32:27.018 }, 00:32:27.018 "multi_ctrlr": false, 00:32:27.018 "ana_reporting": false 00:32:27.018 }, 00:32:27.018 "vs": { 00:32:27.018 "nvme_version": "1.4" 00:32:27.018 }, 00:32:27.018 "ns_data": { 00:32:27.018 "id": 1, 00:32:27.018 "can_share": false 00:32:27.018 } 00:32:27.018 } 00:32:27.018 ], 00:32:27.018 "mp_policy": "active_passive" 00:32:27.018 } 00:32:27.018 } 00:32:27.018 ]' 00:32:27.018 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:32:27.018 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:32:27.018 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:27.276 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:27.276 14:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f723b8c5-6f54-4ce0-8d00-c89912bcfc5b 00:32:27.276 14:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:27.276 14:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f723b8c5-6f54-4ce0-8d00-c89912bcfc5b 00:32:27.843 14:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:28.100 14:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=9bcea4bc-2b5d-43d0-aab5-cf6d2beb5d3b 00:32:28.100 14:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 9bcea4bc-2b5d-43d0-aab5-cf6d2beb5d3b 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=52f943eb-ddbf-4775-8a0d-cd581b081ba3 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 52f943eb-ddbf-4775-8a0d-cd581b081ba3 ]] 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 52f943eb-ddbf-4775-8a0d-cd581b081ba3 5120 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=52f943eb-ddbf-4775-8a0d-cd581b081ba3 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 52f943eb-ddbf-4775-8a0d-cd581b081ba3 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=52f943eb-ddbf-4775-8a0d-cd581b081ba3 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52f943eb-ddbf-4775-8a0d-cd581b081ba3 00:32:28.358 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:32:28.358 { 00:32:28.358 "name": "52f943eb-ddbf-4775-8a0d-cd581b081ba3", 00:32:28.358 "aliases": [ 00:32:28.358 "lvs/basen1p0" 00:32:28.358 ], 00:32:28.358 "product_name": "Logical Volume", 00:32:28.358 "block_size": 4096, 00:32:28.358 "num_blocks": 5242880, 00:32:28.358 "uuid": "52f943eb-ddbf-4775-8a0d-cd581b081ba3", 00:32:28.358 "assigned_rate_limits": { 00:32:28.358 "rw_ios_per_sec": 0, 00:32:28.358 "rw_mbytes_per_sec": 0, 00:32:28.358 "r_mbytes_per_sec": 0, 00:32:28.358 "w_mbytes_per_sec": 0 00:32:28.358 }, 00:32:28.358 "claimed": false, 00:32:28.358 "zoned": false, 00:32:28.358 "supported_io_types": { 00:32:28.358 "read": true, 00:32:28.358 "write": true, 00:32:28.358 "unmap": true, 00:32:28.358 "flush": false, 00:32:28.359 "reset": true, 00:32:28.359 "nvme_admin": false, 00:32:28.359 "nvme_io": false, 00:32:28.359 "nvme_io_md": false, 00:32:28.359 "write_zeroes": 
true, 00:32:28.359 "zcopy": false, 00:32:28.359 "get_zone_info": false, 00:32:28.359 "zone_management": false, 00:32:28.359 "zone_append": false, 00:32:28.359 "compare": false, 00:32:28.359 "compare_and_write": false, 00:32:28.359 "abort": false, 00:32:28.359 "seek_hole": true, 00:32:28.359 "seek_data": true, 00:32:28.359 "copy": false, 00:32:28.359 "nvme_iov_md": false 00:32:28.359 }, 00:32:28.359 "driver_specific": { 00:32:28.359 "lvol": { 00:32:28.359 "lvol_store_uuid": "9bcea4bc-2b5d-43d0-aab5-cf6d2beb5d3b", 00:32:28.359 "base_bdev": "basen1", 00:32:28.359 "thin_provision": true, 00:32:28.359 "num_allocated_clusters": 0, 00:32:28.359 "snapshot": false, 00:32:28.359 "clone": false, 00:32:28.359 "esnap_clone": false 00:32:28.359 } 00:32:28.359 } 00:32:28.359 } 00:32:28.359 ]' 00:32:28.359 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:28.616 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:28.872 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:28.872 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:28.872 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:29.128 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:29.128 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:29.128 14:06:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 52f943eb-ddbf-4775-8a0d-cd581b081ba3 -c cachen1p0 --l2p_dram_limit 2 00:32:29.128 [2024-11-04 14:06:16.034132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.034195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:29.128 [2024-11-04 14:06:16.034218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:29.128 [2024-11-04 14:06:16.034230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.034316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.034329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:29.128 [2024-11-04 14:06:16.034343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:32:29.128 [2024-11-04 14:06:16.034353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.034378] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:29.128 [2024-11-04 
14:06:16.035524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:29.128 [2024-11-04 14:06:16.035563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.035586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:29.128 [2024-11-04 14:06:16.035602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.186 ms 00:32:29.128 [2024-11-04 14:06:16.035613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.035708] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 876708f9-d295-4d45-ada8-565130fddb77 00:32:29.128 [2024-11-04 14:06:16.037239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.037272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:29.128 [2024-11-04 14:06:16.037287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:29.128 [2024-11-04 14:06:16.037301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.044920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.044969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:29.128 [2024-11-04 14:06:16.044989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.566 ms 00:32:29.128 [2024-11-04 14:06:16.045004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.045086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.045109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:29.128 [2024-11-04 14:06:16.045122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:29.128 [2024-11-04 14:06:16.045139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.045223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.128 [2024-11-04 14:06:16.045240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:29.128 [2024-11-04 14:06:16.045252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:29.128 [2024-11-04 14:06:16.045272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.128 [2024-11-04 14:06:16.045299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:29.386 [2024-11-04 14:06:16.050906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.386 [2024-11-04 14:06:16.050946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:29.386 [2024-11-04 14:06:16.050963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.611 ms 00:32:29.386 [2024-11-04 14:06:16.050974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.386 [2024-11-04 14:06:16.051008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.386 [2024-11-04 14:06:16.051020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:29.386 [2024-11-04 14:06:16.051033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:29.386 [2024-11-04 14:06:16.051043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:29.386 [2024-11-04 14:06:16.051099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:29.386 [2024-11-04 14:06:16.051233] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:29.386 [2024-11-04 14:06:16.051253] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:29.386 [2024-11-04 14:06:16.051268] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:29.386 [2024-11-04 14:06:16.051284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:29.386 [2024-11-04 14:06:16.051296] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:29.386 [2024-11-04 14:06:16.051310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:29.386 [2024-11-04 14:06:16.051321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:29.386 [2024-11-04 14:06:16.051337] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:29.386 [2024-11-04 14:06:16.051347] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:29.386 [2024-11-04 14:06:16.051360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.386 [2024-11-04 14:06:16.051371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:29.386 [2024-11-04 14:06:16.051384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:32:29.386 [2024-11-04 14:06:16.051394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.386 [2024-11-04 14:06:16.051473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.387 [2024-11-04 14:06:16.051485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:29.387 [2024-11-04 14:06:16.051500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:32:29.387 [2024-11-04 14:06:16.051522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.387 [2024-11-04 14:06:16.051642] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:29.387 [2024-11-04 14:06:16.051656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:29.387 [2024-11-04 14:06:16.051669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:29.387 [2024-11-04 14:06:16.051680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.051693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:29.387 [2024-11-04 14:06:16.051703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.051715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:29.387 [2024-11-04 14:06:16.051725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:29.387 [2024-11-04 14:06:16.051738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:29.387 [2024-11-04 14:06:16.051747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.051759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:29.387 [2024-11-04 14:06:16.051770] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:29.387 [2024-11-04 14:06:16.051782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.051792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:29.387 [2024-11-04 14:06:16.051804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:29.387 [2024-11-04 14:06:16.051814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.051828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:29.387 [2024-11-04 14:06:16.051838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:29.387 [2024-11-04 14:06:16.051856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.051866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:29.387 [2024-11-04 14:06:16.051880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:29.387 [2024-11-04 14:06:16.051889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:29.387 [2024-11-04 14:06:16.051901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:29.387 [2024-11-04 14:06:16.051911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:29.387 [2024-11-04 14:06:16.051923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:29.387 [2024-11-04 14:06:16.051933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:29.387 [2024-11-04 14:06:16.051945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:29.387 [2024-11-04 14:06:16.051954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:29.387 [2024-11-04 14:06:16.051966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:29.387 [2024-11-04 14:06:16.051976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:29.387 [2024-11-04 14:06:16.051988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:29.387 [2024-11-04 14:06:16.051997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:29.387 [2024-11-04 14:06:16.052012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:29.387 [2024-11-04 14:06:16.052021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.052034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:29.387 [2024-11-04 14:06:16.052043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:29.387 [2024-11-04 14:06:16.052055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.052065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:29.387 [2024-11-04 14:06:16.052076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:29.387 [2024-11-04 14:06:16.052086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.052098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:29.387 [2024-11-04 14:06:16.052107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:29.387 [2024-11-04 14:06:16.052119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.052128] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:29.387 [2024-11-04 14:06:16.052141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:29.387 [2024-11-04 14:06:16.052152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:29.387 [2024-11-04 14:06:16.052166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:29.387 [2024-11-04 14:06:16.052177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:29.387 [2024-11-04 14:06:16.052192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:29.387 [2024-11-04 14:06:16.052201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:29.387 [2024-11-04 14:06:16.052215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:29.387 [2024-11-04 14:06:16.052224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:29.387 [2024-11-04 14:06:16.052237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:29.387 [2024-11-04 14:06:16.052251] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:29.387 [2024-11-04 14:06:16.052266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:29.387 [2024-11-04 14:06:16.052294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:29.387 [2024-11-04 14:06:16.052328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:29.387 [2024-11-04 14:06:16.052341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:29.387 [2024-11-04 14:06:16.052352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:29.387 [2024-11-04 14:06:16.052365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:29.387 [2024-11-04 14:06:16.052450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:29.387 [2024-11-04 14:06:16.052465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:29.387 [2024-11-04 14:06:16.052490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:29.387 [2024-11-04 14:06:16.052501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:29.387 [2024-11-04 14:06:16.052515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:29.387 [2024-11-04 14:06:16.052526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.387 [2024-11-04 14:06:16.052540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:29.387 [2024-11-04 14:06:16.052568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.964 ms 00:32:29.387 [2024-11-04 14:06:16.052949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.387 [2024-11-04 14:06:16.053076] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
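A note on the scrub step that follows: from the layout dump above, the NV cache device is 5120.00 MiB with an NV cache chunk count of 5, so each scrubbed chunk spans roughly 1 GiB of the cache's data region. A quick back-of-the-envelope check, using only values from this trace:

    # Approximate per-chunk span implied by the layout dump; the exact figure
    # is slightly smaller because metadata regions also live on the cache device.
    echo $((5120 / 5))   # ~1024 MiB per chunk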
00:32:29.387 [2024-11-04 14:06:16.053213] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:31.914 [2024-11-04 14:06:18.486334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.486620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:31.914 [2024-11-04 14:06:18.486716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2433.241 ms 00:32:31.914 [2024-11-04 14:06:18.486760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.526829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.527222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:31.914 [2024-11-04 14:06:18.527326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.646 ms 00:32:31.914 [2024-11-04 14:06:18.527369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.527639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.527686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:31.914 [2024-11-04 14:06:18.527850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:31.914 [2024-11-04 14:06:18.527896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.574456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.574665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:31.914 [2024-11-04 14:06:18.574764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.466 ms 00:32:31.914 [2024-11-04 14:06:18.574812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.574872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.574892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:31.914 [2024-11-04 14:06:18.574905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:31.914 [2024-11-04 14:06:18.574919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.575420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.575447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:31.914 [2024-11-04 14:06:18.575460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:32:31.914 [2024-11-04 14:06:18.575474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.575528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.575544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:31.914 [2024-11-04 14:06:18.575558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:31.914 [2024-11-04 14:06:18.575587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.596344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.596407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:31.914 [2024-11-04 14:06:18.596424] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.732 ms 00:32:31.914 [2024-11-04 14:06:18.596437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.610136] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:31.914 [2024-11-04 14:06:18.611243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.611275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:31.914 [2024-11-04 14:06:18.611294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.637 ms 00:32:31.914 [2024-11-04 14:06:18.611305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.651337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.651628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:31.914 [2024-11-04 14:06:18.651662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.969 ms 00:32:31.914 [2024-11-04 14:06:18.651673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.914 [2024-11-04 14:06:18.651807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.914 [2024-11-04 14:06:18.651826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:31.914 [2024-11-04 14:06:18.651845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:32:31.915 [2024-11-04 14:06:18.651856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.915 [2024-11-04 14:06:18.691876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.915 [2024-11-04 14:06:18.691940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:31.915 [2024-11-04 14:06:18.691961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.929 ms 00:32:31.915 [2024-11-04 14:06:18.691972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.915 [2024-11-04 14:06:18.731710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.915 [2024-11-04 14:06:18.731781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:31.915 [2024-11-04 14:06:18.731802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.676 ms 00:32:31.915 [2024-11-04 14:06:18.731812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:31.915 [2024-11-04 14:06:18.732576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:31.915 [2024-11-04 14:06:18.732601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:31.915 [2024-11-04 14:06:18.732616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.699 ms 00:32:31.915 [2024-11-04 14:06:18.732627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.838199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.173 [2024-11-04 14:06:18.838493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:32.173 [2024-11-04 14:06:18.838531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.465 ms 00:32:32.173 [2024-11-04 14:06:18.838544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.882701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:32.173 [2024-11-04 14:06:18.882776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:32.173 [2024-11-04 14:06:18.882812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.961 ms 00:32:32.173 [2024-11-04 14:06:18.882823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.924662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.173 [2024-11-04 14:06:18.924728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:32.173 [2024-11-04 14:06:18.924747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.748 ms 00:32:32.173 [2024-11-04 14:06:18.924759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.965667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.173 [2024-11-04 14:06:18.965736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:32.173 [2024-11-04 14:06:18.965757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.806 ms 00:32:32.173 [2024-11-04 14:06:18.965772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.965855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.173 [2024-11-04 14:06:18.965868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:32.173 [2024-11-04 14:06:18.965885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:32.173 [2024-11-04 14:06:18.965897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.966025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:32.173 [2024-11-04 14:06:18.966038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:32.173 [2024-11-04 14:06:18.966056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:32:32.173 [2024-11-04 14:06:18.966066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:32.173 [2024-11-04 14:06:18.967331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2932.698 ms, result 0 00:32:32.173 { 00:32:32.173 "name": "ftl", 00:32:32.173 "uuid": "876708f9-d295-4d45-ada8-565130fddb77" 00:32:32.173 } 00:32:32.173 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:32.431 [2024-11-04 14:06:19.290447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.431 14:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:32.688 14:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:32.945 [2024-11-04 14:06:19.706843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:32.945 14:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:33.201 [2024-11-04 14:06:19.909042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:33.201 14:06:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:33.460 Fill FTL, iteration 1 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81332 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81332 /var/tmp/spdk.tgt.sock 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81332 ']' 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:33.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.460 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:33.460 [2024-11-04 14:06:20.360368] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
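The trace above brings up a second SPDK application (spdk_tgt, pid 81332) on core 1 with its own RPC socket so it can act as the NVMe/TCP initiator without disturbing the target already running on core 0. A minimal sketch of that pattern, assuming the repository paths from the trace; polling rpc_get_methods is an illustrative readiness probe here, not the test's waitforlisten helper itself:

    # Run a second SPDK app side by side: its own CPU mask, its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    # Poll until the UNIX socket answers before issuing bdev RPCs against it.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done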
00:32:33.460 [2024-11-04 14:06:20.360494] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81332 ] 00:32:33.719 [2024-11-04 14:06:20.535174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.977 [2024-11-04 14:06:20.714356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.912 14:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.912 14:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:32:34.912 14:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:35.171 ftln1 00:32:35.171 14:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:35.171 14:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81332 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81332 ']' 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81332 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81332 00:32:35.429 killing process with pid 81332 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81332' 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81332 00:32:35.429 14:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81332 00:32:37.959 14:06:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:37.959 14:06:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:37.959 [2024-11-04 14:06:24.629343] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
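The spdk_dd run starting above is the first pass of the test's fill loop: 1024 blocks of 1 MiB (bs=1048576) are streamed from /dev/urandom into ftln1 at queue depth 2, and the seek offset then advances by the block count for the next iteration. A condensed sketch of that loop, using the parameter names visible in the trace (tcp_dd is the test's own wrapper around spdk_dd):

    bs=1048576; count=1024; qd=2; seek=0
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        # Each pass writes the next 1 GiB stripe, so advance by one stripe.
        seek=$((seek + count))
    done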
00:32:37.959 [2024-11-04 14:06:24.629529] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81390 ] 00:32:37.959 [2024-11-04 14:06:24.817416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.217 [2024-11-04 14:06:24.939357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.592  [2024-11-04T14:06:27.449Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-04T14:06:28.827Z] Copying: 443/1024 [MB] (224 MBps) [2024-11-04T14:06:29.760Z] Copying: 666/1024 [MB] (223 MBps) [2024-11-04T14:06:30.018Z] Copying: 895/1024 [MB] (229 MBps) [2024-11-04T14:06:31.390Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:32:44.468 00:32:44.468 Calculate MD5 checksum, iteration 1 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:44.468 14:06:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:44.468 [2024-11-04 14:06:31.284856] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
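The companion read-back pass starting above dumps the same 1 GiB region out of ftln1 into test/ftl/file; the checksum recorded a few lines below (sums[i]=905354d2...) is simply the md5sum of that file. A sketch of the capture step, with paths and flags taken from the trace:

    # Read the slice back over NVMe/TCP, then record its MD5 for verification.
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    skip=$((skip + count))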
00:32:44.468 [2024-11-04 14:06:31.285043] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81460 ] 00:32:44.726 [2024-11-04 14:06:31.476476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.726 [2024-11-04 14:06:31.598000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.643  [2024-11-04T14:06:34.131Z] Copying: 611/1024 [MB] (611 MBps) [2024-11-04T14:06:35.066Z] Copying: 1024/1024 [MB] (average 561 MBps) 00:32:48.144 00:32:48.144 14:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:48.144 14:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:50.043 Fill FTL, iteration 2 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=905354d2151deecc7b513bf25e0c403a 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:50.043 14:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:50.043 [2024-11-04 14:06:36.829462] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
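For the second fill pass starting above, --seek=1024 places the writes one stripe further into the device: 1024 blocks at bs=1048576 bytes each is exactly the size=1073741824 (1 GiB) slice declared at the top of this phase. The offset arithmetic, using only values from the trace:

    # 1024 blocks x 1048576 bytes/block = 1 GiB, the second stripe of the fill.
    echo $((1024 * 1048576))   # 1073741824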
00:32:50.043 [2024-11-04 14:06:36.829876] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81516 ] 00:32:50.302 [2024-11-04 14:06:37.021747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.302 [2024-11-04 14:06:37.194556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.249  [2024-11-04T14:06:39.738Z] Copying: 232/1024 [MB] (232 MBps) [2024-11-04T14:06:40.673Z] Copying: 462/1024 [MB] (230 MBps) [2024-11-04T14:06:42.048Z] Copying: 694/1024 [MB] (232 MBps) [2024-11-04T14:06:42.306Z] Copying: 928/1024 [MB] (234 MBps) [2024-11-04T14:06:43.241Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:32:56.319 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:56.578 Calculate MD5 checksum, iteration 2 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:56.578 14:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:56.578 [2024-11-04 14:06:43.343033] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
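The second checksum pass starting above mirrors the first; once sums[] holds both digests, the script turns to the FTL property RPCs whose output follows. These are standard SPDK RPCs, reproduced here exactly as they appear later in this trace:

    # Flip verbose_mode, arm the shutdown-upgrade path, then dump all properties.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl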
00:32:56.578 [2024-11-04 14:06:43.343170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81590 ] 00:32:56.837 [2024-11-04 14:06:43.510986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.837 [2024-11-04 14:06:43.632846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.738  [2024-11-04T14:06:46.225Z] Copying: 614/1024 [MB] (614 MBps) [2024-11-04T14:06:47.597Z] Copying: 1024/1024 [MB] (average 607 MBps) 00:33:00.675 00:33:00.675 14:06:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:00.675 14:06:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:02.580 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:02.580 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1cdc0766914ea9febdb17f3f53caf045 00:33:02.580 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:02.581 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:02.581 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:02.581 [2024-11-04 14:06:49.402718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.581 [2024-11-04 14:06:49.402773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:02.581 [2024-11-04 14:06:49.402790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:02.581 [2024-11-04 14:06:49.402803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.581 [2024-11-04 14:06:49.402832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.581 [2024-11-04 14:06:49.402844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:02.581 [2024-11-04 14:06:49.402856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:02.581 [2024-11-04 14:06:49.402870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.581 [2024-11-04 14:06:49.402892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:02.581 [2024-11-04 14:06:49.402903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:02.581 [2024-11-04 14:06:49.402914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:02.581 [2024-11-04 14:06:49.402924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:02.581 [2024-11-04 14:06:49.403000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.278 ms, result 0 00:33:02.581 true 00:33:02.581 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:02.839 { 00:33:02.839 "name": "ftl", 00:33:02.839 "properties": [ 00:33:02.839 { 00:33:02.839 "name": "superblock_version", 00:33:02.839 "value": 5, 00:33:02.839 "read-only": true 00:33:02.839 }, 00:33:02.839 { 00:33:02.839 "name": "base_device", 00:33:02.839 "bands": [ 00:33:02.839 { 00:33:02.839 "id": 0, 00:33:02.839 "state": "FREE", 00:33:02.839 "validity": 0.0 
00:33:02.839 }, 00:33:02.839 { 00:33:02.839 "id": 1, 00:33:02.839 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 2, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 3, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 4, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 5, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 6, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 7, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 8, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 9, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 10, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 11, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 12, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 13, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 14, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 15, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 16, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 17, 00:33:02.840 "state": "FREE", 00:33:02.840 "validity": 0.0 00:33:02.840 } 00:33:02.840 ], 00:33:02.840 "read-only": true 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "name": "cache_device", 00:33:02.840 "type": "bdev", 00:33:02.840 "chunks": [ 00:33:02.840 { 00:33:02.840 "id": 0, 00:33:02.840 "state": "INACTIVE", 00:33:02.840 "utilization": 0.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 1, 00:33:02.840 "state": "CLOSED", 00:33:02.840 "utilization": 1.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 2, 00:33:02.840 "state": "CLOSED", 00:33:02.840 "utilization": 1.0 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 3, 00:33:02.840 "state": "OPEN", 00:33:02.840 "utilization": 0.001953125 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "id": 4, 00:33:02.840 "state": "OPEN", 00:33:02.840 "utilization": 0.0 00:33:02.840 } 00:33:02.840 ], 00:33:02.840 "read-only": true 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "name": "verbose_mode", 00:33:02.840 "value": true, 00:33:02.840 "unit": "", 00:33:02.840 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:02.840 }, 00:33:02.840 { 00:33:02.840 "name": "prep_upgrade_on_shutdown", 00:33:02.840 "value": false, 00:33:02.840 "unit": "", 00:33:02.840 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:02.841 } 00:33:02.841 ] 00:33:02.841 } 00:33:02.841 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:03.103 [2024-11-04 14:06:49.903204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:33:03.103 [2024-11-04 14:06:49.903267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:03.103 [2024-11-04 14:06:49.903284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:03.103 [2024-11-04 14:06:49.903295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.103 [2024-11-04 14:06:49.903322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.103 [2024-11-04 14:06:49.903333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:03.103 [2024-11-04 14:06:49.903344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:03.103 [2024-11-04 14:06:49.903354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.103 [2024-11-04 14:06:49.903374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.103 [2024-11-04 14:06:49.903386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:03.103 [2024-11-04 14:06:49.903396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:03.103 [2024-11-04 14:06:49.903405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.103 [2024-11-04 14:06:49.903466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:33:03.103 true 00:33:03.103 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:03.103 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:03.103 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:03.363 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:03.363 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:03.363 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:03.620 [2024-11-04 14:06:50.299584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.620 [2024-11-04 14:06:50.299660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:03.620 [2024-11-04 14:06:50.299676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:03.620 [2024-11-04 14:06:50.299687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.620 [2024-11-04 14:06:50.299715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.620 [2024-11-04 14:06:50.299726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:03.620 [2024-11-04 14:06:50.299737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:03.620 [2024-11-04 14:06:50.299747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:03.620 [2024-11-04 14:06:50.299767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:03.620 [2024-11-04 14:06:50.299779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:03.620 [2024-11-04 14:06:50.299789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:03.620 [2024-11-04 14:06:50.299799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:03.620 [2024-11-04 14:06:50.299860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.279 ms, result 0 00:33:03.620 true 00:33:03.620 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:03.620 { 00:33:03.620 "name": "ftl", 00:33:03.620 "properties": [ 00:33:03.620 { 00:33:03.620 "name": "superblock_version", 00:33:03.620 "value": 5, 00:33:03.620 "read-only": true 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "name": "base_device", 00:33:03.620 "bands": [ 00:33:03.620 { 00:33:03.620 "id": 0, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 1, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 2, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 3, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 4, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 5, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 6, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 7, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 8, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 9, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 10, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 11, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 12, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 13, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.620 "id": 14, 00:33:03.620 "state": "FREE", 00:33:03.620 "validity": 0.0 00:33:03.620 }, 00:33:03.620 { 00:33:03.621 "id": 15, 00:33:03.621 "state": "FREE", 00:33:03.621 "validity": 0.0 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "id": 16, 00:33:03.621 "state": "FREE", 00:33:03.621 "validity": 0.0 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "id": 17, 00:33:03.621 "state": "FREE", 00:33:03.621 "validity": 0.0 00:33:03.621 } 00:33:03.621 ], 00:33:03.621 "read-only": true 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "name": "cache_device", 00:33:03.621 "type": "bdev", 00:33:03.621 "chunks": [ 00:33:03.621 { 00:33:03.621 "id": 0, 00:33:03.621 "state": "INACTIVE", 00:33:03.621 "utilization": 0.0 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "id": 1, 00:33:03.621 "state": "CLOSED", 00:33:03.621 "utilization": 1.0 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "id": 2, 00:33:03.621 "state": "CLOSED", 00:33:03.621 "utilization": 1.0 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "id": 3, 00:33:03.621 "state": "OPEN", 00:33:03.621 "utilization": 0.001953125 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "id": 4, 00:33:03.621 "state": "OPEN", 00:33:03.621 "utilization": 0.0 00:33:03.621 } 00:33:03.621 ], 00:33:03.621 "read-only": true 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "name": "verbose_mode", 
00:33:03.621 "value": true, 00:33:03.621 "unit": "", 00:33:03.621 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:03.621 }, 00:33:03.621 { 00:33:03.621 "name": "prep_upgrade_on_shutdown", 00:33:03.621 "value": true, 00:33:03.621 "unit": "", 00:33:03.621 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:03.621 } 00:33:03.621 ] 00:33:03.621 } 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81215 ]] 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81215 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81215 ']' 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81215 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:03.621 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81215 00:33:03.879 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:03.879 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:03.879 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81215' 00:33:03.879 killing process with pid 81215 00:33:03.879 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81215 00:33:03.879 14:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81215 00:33:04.814 [2024-11-04 14:06:51.690849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:04.814 [2024-11-04 14:06:51.710048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.814 [2024-11-04 14:06:51.710105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:04.814 [2024-11-04 14:06:51.710121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:04.814 [2024-11-04 14:06:51.710133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.814 [2024-11-04 14:06:51.710157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:04.814 [2024-11-04 14:06:51.714413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.814 [2024-11-04 14:06:51.714446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:04.814 [2024-11-04 14:06:51.714459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.240 ms 00:33:04.814 [2024-11-04 14:06:51.714470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-11-04 14:06:59.860888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-11-04 14:06:59.860975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:12.959 [2024-11-04 14:06:59.861012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8146.348 ms 00:33:12.959 [2024-11-04 14:06:59.861037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-11-04 14:06:59.862646] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-11-04 14:06:59.862689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:12.959 [2024-11-04 14:06:59.862708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.581 ms 00:33:12.959 [2024-11-04 14:06:59.862725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.959 [2024-11-04 14:06:59.864275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.959 [2024-11-04 14:06:59.864319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:12.959 [2024-11-04 14:06:59.864339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.504 ms 00:33:12.959 [2024-11-04 14:06:59.864356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.889372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.889464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:13.218 [2024-11-04 14:06:59.889490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.908 ms 00:33:13.218 [2024-11-04 14:06:59.889506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.901527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.901606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:13.218 [2024-11-04 14:06:59.901624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.930 ms 00:33:13.218 [2024-11-04 14:06:59.901636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.901757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.901772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:13.218 [2024-11-04 14:06:59.901793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:33:13.218 [2024-11-04 14:06:59.901803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.917585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.917654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:13.218 [2024-11-04 14:06:59.917672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.756 ms 00:33:13.218 [2024-11-04 14:06:59.917683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.933861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.933936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:13.218 [2024-11-04 14:06:59.933953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.121 ms 00:33:13.218 [2024-11-04 14:06:59.933964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.950900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.950977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:13.218 [2024-11-04 14:06:59.950996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.873 ms 00:33:13.218 [2024-11-04 14:06:59.951007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.968187] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.218 [2024-11-04 14:06:59.968262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:13.218 [2024-11-04 14:06:59.968280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.042 ms 00:33:13.218 [2024-11-04 14:06:59.968292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.218 [2024-11-04 14:06:59.968347] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:13.218 [2024-11-04 14:06:59.968368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:13.218 [2024-11-04 14:06:59.968384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:13.218 [2024-11-04 14:06:59.968421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:13.218 [2024-11-04 14:06:59.968434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:13.218 [2024-11-04 14:06:59.968630] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:13.218 [2024-11-04 14:06:59.968642] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 876708f9-d295-4d45-ada8-565130fddb77 00:33:13.218 [2024-11-04 14:06:59.968655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:13.218 [2024-11-04 14:06:59.968666] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:13.218 [2024-11-04 14:06:59.968677] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:13.218 [2024-11-04 14:06:59.968689] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:13.219 [2024-11-04 14:06:59.968700] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:13.219 [2024-11-04 14:06:59.968717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:13.219 [2024-11-04 14:06:59.968728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:13.219 [2024-11-04 14:06:59.968738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:13.219 [2024-11-04 14:06:59.968750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:13.219 [2024-11-04 14:06:59.968769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.219 [2024-11-04 14:06:59.968802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:13.219 [2024-11-04 14:06:59.968814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:33:13.219 [2024-11-04 14:06:59.968826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-11-04 14:06:59.991708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.219 [2024-11-04 14:06:59.991787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:13.219 [2024-11-04 14:06:59.991805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.827 ms 00:33:13.219 [2024-11-04 14:06:59.991829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-11-04 14:06:59.992550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.219 [2024-11-04 14:06:59.992584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:13.219 [2024-11-04 14:06:59.992598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:33:13.219 [2024-11-04 14:06:59.992609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-11-04 14:07:00.064917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.219 [2024-11-04 14:07:00.064984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:13.219 [2024-11-04 14:07:00.065016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.219 [2024-11-04 14:07:00.065027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-11-04 14:07:00.065086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.219 [2024-11-04 14:07:00.065097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:13.219 [2024-11-04 14:07:00.065108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.219 [2024-11-04 14:07:00.065118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-11-04 14:07:00.065242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.219 [2024-11-04 14:07:00.065273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:13.219 [2024-11-04 14:07:00.065285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.219 [2024-11-04 14:07:00.065296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.219 [2024-11-04 14:07:00.065322] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.219 [2024-11-04 14:07:00.065334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:13.219 [2024-11-04 14:07:00.065346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.219 [2024-11-04 14:07:00.065357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.194523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.194615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:13.477 [2024-11-04 14:07:00.194632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.194657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.300351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.300428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:13.477 [2024-11-04 14:07:00.300444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.300457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.300627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.300644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:13.477 [2024-11-04 14:07:00.300656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.300666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.300729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.300741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:13.477 [2024-11-04 14:07:00.300751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.300774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.300951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.300966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:13.477 [2024-11-04 14:07:00.300978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.300989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.301028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.301047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:13.477 [2024-11-04 14:07:00.301058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.301069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.301111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.301123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:13.477 [2024-11-04 14:07:00.301134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.301145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 
[2024-11-04 14:07:00.301197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:13.477 [2024-11-04 14:07:00.301210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:13.477 [2024-11-04 14:07:00.301221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:13.477 [2024-11-04 14:07:00.301232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.477 [2024-11-04 14:07:00.301378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8591.254 ms, result 0 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81801 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81801 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81801 ']' 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:16.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:16.791 14:07:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:17.051 [2024-11-04 14:07:03.751477] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
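waitforlisten above blocks the test until the relaunched spdk_tgt answers on /var/tmp/spdk.sock. A bash sketch of that wait loop follows; the rpc_get_methods probe and its cadence are assumptions for illustration, not something shown in this log:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        # probe the RPC server; any successful call proves the socket is live
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1                                      # retries exhausted
}

Only once this returns 0 does the test re-arm verbose_mode and re-read the FTL properties; the property dump further below accordingly shows bands in CLOSED state, matching the band layout persisted during the traced shutdown.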
00:33:17.051 [2024-11-04 14:07:03.751699] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81801 ] 00:33:17.051 [2024-11-04 14:07:03.943471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.309 [2024-11-04 14:07:04.069797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.244 [2024-11-04 14:07:05.110030] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:18.244 [2024-11-04 14:07:05.110124] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:18.505 [2024-11-04 14:07:05.258661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.258737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:18.506 [2024-11-04 14:07:05.258754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:18.506 [2024-11-04 14:07:05.258765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.258836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.258850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:18.506 [2024-11-04 14:07:05.258861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:18.506 [2024-11-04 14:07:05.258872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.258905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:18.506 [2024-11-04 14:07:05.260059] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:18.506 [2024-11-04 14:07:05.260097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.260110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:18.506 [2024-11-04 14:07:05.260122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:33:18.506 [2024-11-04 14:07:05.260134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.261711] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:18.506 [2024-11-04 14:07:05.282823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.282903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:18.506 [2024-11-04 14:07:05.282931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.109 ms 00:33:18.506 [2024-11-04 14:07:05.282942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.283063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.283079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:18.506 [2024-11-04 14:07:05.283091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:18.506 [2024-11-04 14:07:05.283101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.291011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 
14:07:05.291072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:18.506 [2024-11-04 14:07:05.291087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.784 ms 00:33:18.506 [2024-11-04 14:07:05.291097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.291189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.291207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:18.506 [2024-11-04 14:07:05.291236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:33:18.506 [2024-11-04 14:07:05.291247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.291311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.291325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:18.506 [2024-11-04 14:07:05.291341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:18.506 [2024-11-04 14:07:05.291352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.291384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:18.506 [2024-11-04 14:07:05.296590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.296633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:18.506 [2024-11-04 14:07:05.296648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.214 ms 00:33:18.506 [2024-11-04 14:07:05.296665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.296706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.296719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:18.506 [2024-11-04 14:07:05.296730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:18.506 [2024-11-04 14:07:05.296741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.296835] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:18.506 [2024-11-04 14:07:05.296864] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:18.506 [2024-11-04 14:07:05.296908] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:18.506 [2024-11-04 14:07:05.296928] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:18.506 [2024-11-04 14:07:05.297049] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:18.506 [2024-11-04 14:07:05.297064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:18.506 [2024-11-04 14:07:05.297079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:18.506 [2024-11-04 14:07:05.297094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297109] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:18.506 [2024-11-04 14:07:05.297212] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:18.506 [2024-11-04 14:07:05.297224] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:18.506 [2024-11-04 14:07:05.297235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:18.506 [2024-11-04 14:07:05.297248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.297259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:18.506 [2024-11-04 14:07:05.297275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:33:18.506 [2024-11-04 14:07:05.297287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.297382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.506 [2024-11-04 14:07:05.297394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:18.506 [2024-11-04 14:07:05.297406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:33:18.506 [2024-11-04 14:07:05.297422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.506 [2024-11-04 14:07:05.297530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:18.506 [2024-11-04 14:07:05.297552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:18.506 [2024-11-04 14:07:05.297582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:18.506 [2024-11-04 14:07:05.297620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:18.506 [2024-11-04 14:07:05.297643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:18.506 [2024-11-04 14:07:05.297654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:18.506 [2024-11-04 14:07:05.297667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:18.506 [2024-11-04 14:07:05.297689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:18.506 [2024-11-04 14:07:05.297700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:18.506 [2024-11-04 14:07:05.297723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:18.506 [2024-11-04 14:07:05.297733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:18.506 [2024-11-04 14:07:05.297755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:18.506 [2024-11-04 14:07:05.297766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297777] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:18.506 [2024-11-04 14:07:05.297788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:18.506 [2024-11-04 14:07:05.297798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:18.506 [2024-11-04 14:07:05.297821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:18.506 [2024-11-04 14:07:05.297831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:18.506 [2024-11-04 14:07:05.297868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:18.506 [2024-11-04 14:07:05.297879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:18.506 [2024-11-04 14:07:05.297901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:18.506 [2024-11-04 14:07:05.297911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:18.506 [2024-11-04 14:07:05.297933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:18.506 [2024-11-04 14:07:05.297944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:18.506 [2024-11-04 14:07:05.297965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:18.506 [2024-11-04 14:07:05.297976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.297987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:18.506 [2024-11-04 14:07:05.297998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:18.506 [2024-11-04 14:07:05.298008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.506 [2024-11-04 14:07:05.298019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:18.506 [2024-11-04 14:07:05.298030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:18.507 [2024-11-04 14:07:05.298041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.507 [2024-11-04 14:07:05.298052] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:18.507 [2024-11-04 14:07:05.298063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:18.507 [2024-11-04 14:07:05.298075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:18.507 [2024-11-04 14:07:05.298086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:18.507 [2024-11-04 14:07:05.298102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:18.507 [2024-11-04 14:07:05.298114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:18.507 [2024-11-04 14:07:05.298125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:18.507 [2024-11-04 14:07:05.298137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:18.507 [2024-11-04 14:07:05.298147] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:18.507 [2024-11-04 14:07:05.298159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:18.507 [2024-11-04 14:07:05.298171] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:18.507 [2024-11-04 14:07:05.298186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:18.507 [2024-11-04 14:07:05.298212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:18.507 [2024-11-04 14:07:05.298248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:18.507 [2024-11-04 14:07:05.298260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:18.507 [2024-11-04 14:07:05.298273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:18.507 [2024-11-04 14:07:05.298285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:18.507 [2024-11-04 14:07:05.298371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:18.507 [2024-11-04 14:07:05.298385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:18.507 [2024-11-04 14:07:05.298410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:18.507 [2024-11-04 14:07:05.298422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:18.507 [2024-11-04 14:07:05.298435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:18.507 [2024-11-04 14:07:05.298448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.507 [2024-11-04 14:07:05.298460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:18.507 [2024-11-04 14:07:05.298472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.983 ms 00:33:18.507 [2024-11-04 14:07:05.298483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.507 [2024-11-04 14:07:05.298541] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:18.507 [2024-11-04 14:07:05.298556] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:21.057 [2024-11-04 14:07:07.740046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.740136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:21.057 [2024-11-04 14:07:07.740164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2441.488 ms 00:33:21.057 [2024-11-04 14:07:07.740182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.781168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.781242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:21.057 [2024-11-04 14:07:07.781267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.668 ms 00:33:21.057 [2024-11-04 14:07:07.781283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.781461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.781496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:21.057 [2024-11-04 14:07:07.781517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:21.057 [2024-11-04 14:07:07.781534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.829973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.830046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:21.057 [2024-11-04 14:07:07.830069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.324 ms 00:33:21.057 [2024-11-04 14:07:07.830094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.830187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.830207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:21.057 [2024-11-04 14:07:07.830227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:21.057 [2024-11-04 14:07:07.830242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.830816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.830850] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:21.057 [2024-11-04 14:07:07.830871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:33:21.057 [2024-11-04 14:07:07.830889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.830982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.831011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:21.057 [2024-11-04 14:07:07.831032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:21.057 [2024-11-04 14:07:07.831050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.851983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.852052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:21.057 [2024-11-04 14:07:07.852077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.885 ms 00:33:21.057 [2024-11-04 14:07:07.852093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.872762] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:21.057 [2024-11-04 14:07:07.872847] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:21.057 [2024-11-04 14:07:07.872875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.872895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:21.057 [2024-11-04 14:07:07.872917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.548 ms 00:33:21.057 [2024-11-04 14:07:07.872932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.895168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.895256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:21.057 [2024-11-04 14:07:07.895283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.102 ms 00:33:21.057 [2024-11-04 14:07:07.895302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.915448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.915543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:21.057 [2024-11-04 14:07:07.915578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.970 ms 00:33:21.057 [2024-11-04 14:07:07.915595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.936661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.936751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:21.057 [2024-11-04 14:07:07.936786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.859 ms 00:33:21.057 [2024-11-04 14:07:07.936801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.057 [2024-11-04 14:07:07.937869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.057 [2024-11-04 14:07:07.937912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:21.057 [2024-11-04 
14:07:07.937936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.798 ms 00:33:21.057 [2024-11-04 14:07:07.937955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.045420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.045520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:21.316 [2024-11-04 14:07:08.045539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 107.413 ms 00:33:21.316 [2024-11-04 14:07:08.045551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.059668] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:21.316 [2024-11-04 14:07:08.060818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.060851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:21.316 [2024-11-04 14:07:08.060866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.169 ms 00:33:21.316 [2024-11-04 14:07:08.060878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.061024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.061042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:21.316 [2024-11-04 14:07:08.061054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:21.316 [2024-11-04 14:07:08.061065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.061135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.061148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:21.316 [2024-11-04 14:07:08.061159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:21.316 [2024-11-04 14:07:08.061169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.061192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.061203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:21.316 [2024-11-04 14:07:08.061213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:21.316 [2024-11-04 14:07:08.061228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.061264] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:21.316 [2024-11-04 14:07:08.061277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.061287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:21.316 [2024-11-04 14:07:08.061297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:21.316 [2024-11-04 14:07:08.061308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.100904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.100989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:21.316 [2024-11-04 14:07:08.101008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.570 ms 00:33:21.316 [2024-11-04 14:07:08.101019] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.101146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.316 [2024-11-04 14:07:08.101160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:21.316 [2024-11-04 14:07:08.101171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:33:21.316 [2024-11-04 14:07:08.101182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.316 [2024-11-04 14:07:08.102452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2843.292 ms, result 0 00:33:21.316 [2024-11-04 14:07:08.117375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.316 [2024-11-04 14:07:08.133416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:21.316 [2024-11-04 14:07:08.143219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:21.316 14:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:21.316 14:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:33:21.316 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:21.316 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:21.316 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:21.575 [2024-11-04 14:07:08.363264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.575 [2024-11-04 14:07:08.363515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:21.575 [2024-11-04 14:07:08.363543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:21.575 [2024-11-04 14:07:08.363561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.575 [2024-11-04 14:07:08.363620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.575 [2024-11-04 14:07:08.363633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:21.575 [2024-11-04 14:07:08.363644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:21.575 [2024-11-04 14:07:08.363655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.575 [2024-11-04 14:07:08.363677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.575 [2024-11-04 14:07:08.363688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:21.575 [2024-11-04 14:07:08.363698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:21.575 [2024-11-04 14:07:08.363709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.575 [2024-11-04 14:07:08.363777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.500 ms, result 0 00:33:21.575 true 00:33:21.575 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:21.834 { 00:33:21.834 "name": "ftl", 00:33:21.834 "properties": [ 00:33:21.834 { 00:33:21.834 "name": "superblock_version", 00:33:21.834 "value": 5, 00:33:21.834 "read-only": true 00:33:21.834 }, 
00:33:21.834 { 00:33:21.834 "name": "base_device", 00:33:21.834 "bands": [ 00:33:21.834 { 00:33:21.834 "id": 0, 00:33:21.834 "state": "CLOSED", 00:33:21.834 "validity": 1.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 1, 00:33:21.834 "state": "CLOSED", 00:33:21.834 "validity": 1.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 2, 00:33:21.834 "state": "CLOSED", 00:33:21.834 "validity": 0.007843137254901933 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 3, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 4, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 5, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 6, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 7, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 8, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 9, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 10, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 11, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 12, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 13, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 14, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 15, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 16, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 17, 00:33:21.834 "state": "FREE", 00:33:21.834 "validity": 0.0 00:33:21.834 } 00:33:21.834 ], 00:33:21.834 "read-only": true 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "name": "cache_device", 00:33:21.834 "type": "bdev", 00:33:21.834 "chunks": [ 00:33:21.834 { 00:33:21.834 "id": 0, 00:33:21.834 "state": "INACTIVE", 00:33:21.834 "utilization": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 1, 00:33:21.834 "state": "OPEN", 00:33:21.834 "utilization": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 2, 00:33:21.834 "state": "OPEN", 00:33:21.834 "utilization": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 3, 00:33:21.834 "state": "FREE", 00:33:21.834 "utilization": 0.0 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "id": 4, 00:33:21.834 "state": "FREE", 00:33:21.834 "utilization": 0.0 00:33:21.834 } 00:33:21.834 ], 00:33:21.834 "read-only": true 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "name": "verbose_mode", 00:33:21.834 "value": true, 00:33:21.834 "unit": "", 00:33:21.834 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:21.834 }, 00:33:21.834 { 00:33:21.834 "name": "prep_upgrade_on_shutdown", 00:33:21.834 "value": false, 00:33:21.834 "unit": "", 00:33:21.834 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:21.834 } 00:33:21.834 ] 00:33:21.834 } 00:33:21.834 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:21.834 14:07:08 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:21.834 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:22.093 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:22.093 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:22.093 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:22.093 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:22.093 14:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:22.353 Validate MD5 checksum, iteration 1 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:22.353 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:22.354 14:07:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:22.623 [2024-11-04 14:07:09.324480] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
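The two jq probes traced above are the preconditions for the shutdown test: no cache_device chunk may still hold data and no band may be left OPENED, i.e. the write buffer has been fully drained before the target gets killed. The spdk_dd run starting here then reads the first 1 GiB window of ftln1 over NVMe/TCP (--bs=1048576 --count=1024 --qd=2 --skip=0) into a scratch file for hashing. A sketch of the two probes, mirroring the xtrace (rpc.py path abbreviated; the combined check is a simplification of the two separate [[ ... -ne 0 ]] tests traced at @83 and @90; note that in the JSON dump above the band list sits under the property named "base_device", so the second filter as written selects nothing and the count is trivially 0):

    # upgrade_shutdown.sh@82: count cache chunks still holding data
    used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')

    # upgrade_shutdown.sh@89: count bands left in the OPENED state
    opened=$(scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "bands")
             | .bands[] | select(.state == "OPENED")] | length')

    # Both must be 0 before proceeding to the dirty shutdown.
    [[ $used -ne 0 || $opened -ne 0 ]] && return 1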
00:33:22.623 [2024-11-04 14:07:09.325006] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81871 ] 00:33:22.623 [2024-11-04 14:07:09.518788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.883 [2024-11-04 14:07:09.642636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.787  [2024-11-04T14:07:11.968Z] Copying: 632/1024 [MB] (632 MBps) [2024-11-04T14:07:13.896Z] Copying: 1024/1024 [MB] (average 622 MBps) 00:33:26.974 00:33:26.974 14:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:26.974 14:07:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=905354d2151deecc7b513bf25e0c403a 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 905354d2151deecc7b513bf25e0c403a != \9\0\5\3\5\4\d\2\1\5\1\d\e\e\c\c\7\b\5\1\3\b\f\2\5\e\0\c\4\0\3\a ]] 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:28.879 Validate MD5 checksum, iteration 2 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:28.879 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:28.879 [2024-11-04 14:07:15.509023] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
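The digest comparison printed above, [[ 905354... != \9\0\5\3\5\4... ]], is less alarming than it looks: inside [[ ]] the right-hand side of != is treated as a glob pattern, so bash xtrace prints the expanded reference digest with every character escaped to show it will be matched literally. Functionally it is a plain string-equality check between the digest just computed for this window and the one recorded when the data was written. A minimal equivalent (variable names assumed, since xtrace only shows expanded values):

    sum=$(md5sum "$file" | cut -f1 -d ' ')
    if [[ $sum != "${sums[i]}" ]]; then   # quoting the RHS forces a literal compare
        echo "MD5 mismatch in window $i: got $sum" >&2
        return 1
    fi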
00:33:28.879 [2024-11-04 14:07:15.509210] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81938 ] 00:33:28.879 [2024-11-04 14:07:15.691972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.138 [2024-11-04 14:07:15.805778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.040  [2024-11-04T14:07:18.237Z] Copying: 676/1024 [MB] (676 MBps) [2024-11-04T14:07:20.170Z] Copying: 1024/1024 [MB] (average 649 MBps) 00:33:33.248 00:33:33.248 14:07:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:33.248 14:07:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1cdc0766914ea9febdb17f3f53caf045 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1cdc0766914ea9febdb17f3f53caf045 != \1\c\d\c\0\7\6\6\9\1\4\e\a\9\f\e\b\d\b\1\7\f\3\f\5\3\c\a\f\0\4\5 ]] 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81801 ]] 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81801 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:35.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82005 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82005 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 82005 ']' 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
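The sequence above is the heart of the test: tcp_target_shutdown_dirty SIGKILLs the target (pid 81801), so FTL never gets to run its shutdown path or mark the superblock clean, and tcp_target_setup immediately relaunches spdk_tgt (pid 82005) from the tgt.json captured earlier. A condensed sketch, paraphrased from the ftl/common.sh xtrace (SPDK_BIN_DIR and testdir stand in for the full paths shown in the trace; backgrounding and pid capture are implied by waitforlisten rather than visible above):

    # Dirty shutdown: no RPC teardown, no SIGTERM handler - just SIGKILL.
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid

    # Relaunch from the saved JSON config. On startup FTL finds the
    # superblock not marked clean and takes the recovery path traced
    # below: SHM state, P2L checkpoint restore, open-chunk recovery.
    "$SPDK_BIN_DIR/spdk_tgt" '--cpumask=[0]' --config="$testdir/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid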
00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:35.150 14:07:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:35.150 [2024-11-04 14:07:21.732354] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:33:35.150 [2024-11-04 14:07:21.732486] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82005 ] 00:33:35.150 [2024-11-04 14:07:21.905492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.150 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81801 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:35.150 [2024-11-04 14:07:22.023771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.087 [2024-11-04 14:07:22.996958] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:36.087 [2024-11-04 14:07:22.997032] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:36.347 [2024-11-04 14:07:23.144293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.347 [2024-11-04 14:07:23.144359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:36.347 [2024-11-04 14:07:23.144376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:36.347 [2024-11-04 14:07:23.144387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.347 [2024-11-04 14:07:23.144447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.347 [2024-11-04 14:07:23.144460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:36.348 [2024-11-04 14:07:23.144471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:33:36.348 [2024-11-04 14:07:23.144481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.144512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:36.348 [2024-11-04 14:07:23.145734] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:36.348 [2024-11-04 14:07:23.145911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.145928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:36.348 [2024-11-04 14:07:23.145942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.408 ms 00:33:36.348 [2024-11-04 14:07:23.145954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.146389] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:36.348 [2024-11-04 14:07:23.172295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.172362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:36.348 [2024-11-04 14:07:23.172380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.902 ms 00:33:36.348 [2024-11-04 14:07:23.172391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.187380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:36.348 [2024-11-04 14:07:23.187420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:36.348 [2024-11-04 14:07:23.187455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:33:36.348 [2024-11-04 14:07:23.187466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.188029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.188050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:36.348 [2024-11-04 14:07:23.188061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:33:36.348 [2024-11-04 14:07:23.188072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.188136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.188153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:36.348 [2024-11-04 14:07:23.188164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:36.348 [2024-11-04 14:07:23.188175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.188206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.188217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:36.348 [2024-11-04 14:07:23.188227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:36.348 [2024-11-04 14:07:23.188238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.188267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:36.348 [2024-11-04 14:07:23.192465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.192496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:36.348 [2024-11-04 14:07:23.192508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.206 ms 00:33:36.348 [2024-11-04 14:07:23.192534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.192567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.192578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:36.348 [2024-11-04 14:07:23.192604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:36.348 [2024-11-04 14:07:23.192614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.192657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:36.348 [2024-11-04 14:07:23.192680] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:36.348 [2024-11-04 14:07:23.192716] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:36.348 [2024-11-04 14:07:23.192737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:36.348 [2024-11-04 14:07:23.192836] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:36.348 [2024-11-04 14:07:23.192850] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:36.348 [2024-11-04 14:07:23.192863] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:36.348 [2024-11-04 14:07:23.192876] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:36.348 [2024-11-04 14:07:23.192889] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:36.348 [2024-11-04 14:07:23.192900] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:36.348 [2024-11-04 14:07:23.192910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:36.348 [2024-11-04 14:07:23.192920] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:36.348 [2024-11-04 14:07:23.192930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:36.348 [2024-11-04 14:07:23.192940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.192954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:36.348 [2024-11-04 14:07:23.192965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:33:36.348 [2024-11-04 14:07:23.192975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.193051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.348 [2024-11-04 14:07:23.193061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:36.348 [2024-11-04 14:07:23.193072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:33:36.348 [2024-11-04 14:07:23.193082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.348 [2024-11-04 14:07:23.193175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:36.348 [2024-11-04 14:07:23.193186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:36.348 [2024-11-04 14:07:23.193200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:36.348 [2024-11-04 14:07:23.193210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:36.348 [2024-11-04 14:07:23.193230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:36.348 [2024-11-04 14:07:23.193251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:36.348 [2024-11-04 14:07:23.193261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:36.348 [2024-11-04 14:07:23.193270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:36.348 [2024-11-04 14:07:23.193289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:36.348 [2024-11-04 14:07:23.193298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:36.348 [2024-11-04 14:07:23.193318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:36.348 [2024-11-04 14:07:23.193327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:36.348 [2024-11-04 14:07:23.193346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:36.348 [2024-11-04 14:07:23.193355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:36.348 [2024-11-04 14:07:23.193373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:36.348 [2024-11-04 14:07:23.193383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:36.348 [2024-11-04 14:07:23.193392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:36.348 [2024-11-04 14:07:23.193412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:36.348 [2024-11-04 14:07:23.193422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:36.348 [2024-11-04 14:07:23.193431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:36.348 [2024-11-04 14:07:23.193440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:36.348 [2024-11-04 14:07:23.193450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:36.348 [2024-11-04 14:07:23.193460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:36.348 [2024-11-04 14:07:23.193469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:36.348 [2024-11-04 14:07:23.193478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:36.348 [2024-11-04 14:07:23.193487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:36.348 [2024-11-04 14:07:23.193497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:36.348 [2024-11-04 14:07:23.193506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:36.348 [2024-11-04 14:07:23.193524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:36.348 [2024-11-04 14:07:23.193533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:36.348 [2024-11-04 14:07:23.193551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:36.348 [2024-11-04 14:07:23.193594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:36.348 [2024-11-04 14:07:23.193603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:36.348 [2024-11-04 14:07:23.193613] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:36.348 [2024-11-04 14:07:23.193623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:36.349 [2024-11-04 14:07:23.193633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:36.349 [2024-11-04 14:07:23.193643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:36.349 [2024-11-04 14:07:23.193653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:36.349 [2024-11-04 14:07:23.193663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:36.349 [2024-11-04 14:07:23.193672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:36.349 [2024-11-04 14:07:23.193682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:36.349 [2024-11-04 14:07:23.193690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:36.349 [2024-11-04 14:07:23.193700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:36.349 [2024-11-04 14:07:23.193710] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:36.349 [2024-11-04 14:07:23.193723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:36.349 [2024-11-04 14:07:23.193744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:36.349 [2024-11-04 14:07:23.193776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:36.349 [2024-11-04 14:07:23.193787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:36.349 [2024-11-04 14:07:23.193797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:36.349 [2024-11-04 14:07:23.193807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:36.349 [2024-11-04 14:07:23.193880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:36.349 [2024-11-04 14:07:23.193891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:36.349 [2024-11-04 14:07:23.193922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:36.349 [2024-11-04 14:07:23.193932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:36.349 [2024-11-04 14:07:23.193944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:36.349 [2024-11-04 14:07:23.193955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.349 [2024-11-04 14:07:23.193969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:36.349 [2024-11-04 14:07:23.193979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:33:36.349 [2024-11-04 14:07:23.193990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.349 [2024-11-04 14:07:23.232015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.349 [2024-11-04 14:07:23.232201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:36.349 [2024-11-04 14:07:23.232226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.966 ms 00:33:36.349 [2024-11-04 14:07:23.232237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.349 [2024-11-04 14:07:23.232292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.349 [2024-11-04 14:07:23.232304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:36.349 [2024-11-04 14:07:23.232315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:36.349 [2024-11-04 14:07:23.232325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.608 [2024-11-04 14:07:23.280641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.608 [2024-11-04 14:07:23.280701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:36.609 [2024-11-04 14:07:23.280718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.240 ms 00:33:36.609 [2024-11-04 14:07:23.280729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.280808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.280821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:36.609 [2024-11-04 14:07:23.280832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:36.609 [2024-11-04 14:07:23.280842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.281013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.281028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:36.609 [2024-11-04 14:07:23.281040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:33:36.609 [2024-11-04 14:07:23.281050] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.281094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.281106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:36.609 [2024-11-04 14:07:23.281116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:36.609 [2024-11-04 14:07:23.281126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.301102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.301158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:36.609 [2024-11-04 14:07:23.301175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.946 ms 00:33:36.609 [2024-11-04 14:07:23.301186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.301367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.301385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:36.609 [2024-11-04 14:07:23.301396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:36.609 [2024-11-04 14:07:23.301407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.340124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.340169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:36.609 [2024-11-04 14:07:23.340184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.691 ms 00:33:36.609 [2024-11-04 14:07:23.340195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.355085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.355259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:36.609 [2024-11-04 14:07:23.355291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.685 ms 00:33:36.609 [2024-11-04 14:07:23.355302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.442613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.442681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:36.609 [2024-11-04 14:07:23.442705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.238 ms 00:33:36.609 [2024-11-04 14:07:23.442716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.442923] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:36.609 [2024-11-04 14:07:23.443050] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:36.609 [2024-11-04 14:07:23.443162] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:36.609 [2024-11-04 14:07:23.443284] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:36.609 [2024-11-04 14:07:23.443297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.443308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:36.609 [2024-11-04 
14:07:23.443319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.505 ms 00:33:36.609 [2024-11-04 14:07:23.443329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.443436] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:36.609 [2024-11-04 14:07:23.443452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.443466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:36.609 [2024-11-04 14:07:23.443477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:36.609 [2024-11-04 14:07:23.443487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.466748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.466798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:36.609 [2024-11-04 14:07:23.466813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.231 ms 00:33:36.609 [2024-11-04 14:07:23.466824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.481294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.481336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:36.609 [2024-11-04 14:07:23.481350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:36.609 [2024-11-04 14:07:23.481361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.609 [2024-11-04 14:07:23.481464] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:36.609 [2024-11-04 14:07:23.481684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.609 [2024-11-04 14:07:23.481701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:36.609 [2024-11-04 14:07:23.481713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:33:36.609 [2024-11-04 14:07:23.481723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.202 [2024-11-04 14:07:24.011653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.202 [2024-11-04 14:07:24.011919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:37.202 [2024-11-04 14:07:24.011950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 528.683 ms 00:33:37.202 [2024-11-04 14:07:24.011963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.202 [2024-11-04 14:07:24.017829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.202 [2024-11-04 14:07:24.017885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:37.202 [2024-11-04 14:07:24.017900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.886 ms 00:33:37.202 [2024-11-04 14:07:24.017911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.202 [2024-11-04 14:07:24.018340] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:37.202 [2024-11-04 14:07:24.018366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.202 [2024-11-04 14:07:24.018378] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:37.202 [2024-11-04 14:07:24.018389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:33:37.202 [2024-11-04 14:07:24.018400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.202 [2024-11-04 14:07:24.018432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.202 [2024-11-04 14:07:24.018444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:37.202 [2024-11-04 14:07:24.018455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:37.202 [2024-11-04 14:07:24.018466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.202 [2024-11-04 14:07:24.018507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 537.041 ms, result 0 00:33:37.202 [2024-11-04 14:07:24.018551] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:37.202 [2024-11-04 14:07:24.018652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.202 [2024-11-04 14:07:24.018664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:37.202 [2024-11-04 14:07:24.018674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:33:37.202 [2024-11-04 14:07:24.018683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.771 [2024-11-04 14:07:24.518629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.771 [2024-11-04 14:07:24.518701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:37.771 [2024-11-04 14:07:24.518719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 498.744 ms 00:33:37.771 [2024-11-04 14:07:24.518731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.771 [2024-11-04 14:07:24.524385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.771 [2024-11-04 14:07:24.524426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:37.771 [2024-11-04 14:07:24.524439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.883 ms 00:33:37.771 [2024-11-04 14:07:24.524450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.771 [2024-11-04 14:07:24.524791] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:37.771 [2024-11-04 14:07:24.524814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.771 [2024-11-04 14:07:24.524824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:37.771 [2024-11-04 14:07:24.524836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:33:37.771 [2024-11-04 14:07:24.524846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.771 [2024-11-04 14:07:24.524878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.771 [2024-11-04 14:07:24.524890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:37.771 [2024-11-04 14:07:24.524901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:37.771 [2024-11-04 14:07:24.524911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.771 [2024-11-04 
14:07:24.524951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 506.395 ms, result 0 00:33:37.772 [2024-11-04 14:07:24.524994] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:37.772 [2024-11-04 14:07:24.525007] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:37.772 [2024-11-04 14:07:24.525020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.525030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:37.772 [2024-11-04 14:07:24.525042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1043.571 ms 00:33:37.772 [2024-11-04 14:07:24.525052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.525082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.525093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:37.772 [2024-11-04 14:07:24.525108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:37.772 [2024-11-04 14:07:24.525118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.536735] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:37.772 [2024-11-04 14:07:24.536880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.536893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:37.772 [2024-11-04 14:07:24.536906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.741 ms 00:33:37.772 [2024-11-04 14:07:24.536917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.537527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.537550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:37.772 [2024-11-04 14:07:24.537586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:33:37.772 [2024-11-04 14:07:24.537597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.539644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.539669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:37.772 [2024-11-04 14:07:24.539682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.023 ms 00:33:37.772 [2024-11-04 14:07:24.539692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.539745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.539758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:37.772 [2024-11-04 14:07:24.539769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:37.772 [2024-11-04 14:07:24.539783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.539884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.539897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:37.772 
[2024-11-04 14:07:24.539907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:37.772 [2024-11-04 14:07:24.539917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.539938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.539948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:37.772 [2024-11-04 14:07:24.539959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:37.772 [2024-11-04 14:07:24.539969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.540002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:37.772 [2024-11-04 14:07:24.540018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.540028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:37.772 [2024-11-04 14:07:24.540038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:37.772 [2024-11-04 14:07:24.540048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.540099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.772 [2024-11-04 14:07:24.540111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:37.772 [2024-11-04 14:07:24.540121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:37.772 [2024-11-04 14:07:24.540132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.772 [2024-11-04 14:07:24.541225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1396.422 ms, result 0 00:33:37.772 [2024-11-04 14:07:24.553535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.772 [2024-11-04 14:07:24.569536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:37.772 [2024-11-04 14:07:24.579128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:37.772 Validate MD5 checksum, iteration 1 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:37.772 14:07:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:37.772 14:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:38.031 [2024-11-04 14:07:24.721854] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 00:33:38.031 [2024-11-04 14:07:24.722264] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82040 ] 00:33:38.031 [2024-11-04 14:07:24.914895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.290 [2024-11-04 14:07:25.086380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.194  [2024-11-04T14:07:27.375Z] Copying: 651/1024 [MB] (651 MBps) [2024-11-04T14:07:30.678Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:33:43.756 00:33:43.756 14:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:43.756 14:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:45.133 Validate MD5 checksum, iteration 2 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=905354d2151deecc7b513bf25e0c403a 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 905354d2151deecc7b513bf25e0c403a != \9\0\5\3\5\4\d\2\1\5\1\d\e\e\c\c\7\b\5\1\3\b\f\2\5\e\0\c\4\0\3\a ]] 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:45.133 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:45.133 [2024-11-04 14:07:32.012208] Starting SPDK v25.01-pre git sha1 
1ca833860 / DPDK 24.03.0 initialization... 00:33:45.133 [2024-11-04 14:07:32.012383] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82119 ] 00:33:45.392 [2024-11-04 14:07:32.202706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.652 [2024-11-04 14:07:32.362845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.557  [2024-11-04T14:07:34.738Z] Copying: 662/1024 [MB] (662 MBps) [2024-11-04T14:07:36.651Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:33:49.729 00:33:49.729 14:07:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:49.729 14:07:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1cdc0766914ea9febdb17f3f53caf045 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1cdc0766914ea9febdb17f3f53caf045 != \1\c\d\c\0\7\6\6\9\1\4\e\a\9\f\e\b\d\b\1\7\f\3\f\5\3\c\a\f\0\4\5 ]] 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:51.630 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82005 ]] 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82005 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 82005 ']' 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 82005 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82005 00:33:51.889 killing process with pid 82005 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82005' 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 82005 00:33:51.889 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 82005 00:33:53.267 [2024-11-04 14:07:39.793374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:53.267 [2024-11-04 14:07:39.814047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.814096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:53.267 [2024-11-04 14:07:39.814112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:53.267 [2024-11-04 14:07:39.814124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.814148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:53.267 [2024-11-04 14:07:39.818506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.818541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:53.267 [2024-11-04 14:07:39.818554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.341 ms 00:33:53.267 [2024-11-04 14:07:39.818585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.818794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.818807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:53.267 [2024-11-04 14:07:39.818819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:33:53.267 [2024-11-04 14:07:39.818834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.819940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.819970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:53.267 [2024-11-04 14:07:39.819983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.079 ms 00:33:53.267 [2024-11-04 14:07:39.819994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.821023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.821051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:53.267 [2024-11-04 14:07:39.821064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.987 ms 00:33:53.267 [2024-11-04 14:07:39.821074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.836582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.836620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:53.267 [2024-11-04 14:07:39.836634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.470 ms 00:33:53.267 [2024-11-04 14:07:39.836650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.844814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.844854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:53.267 [2024-11-04 14:07:39.844868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.126 ms 00:33:53.267 [2024-11-04 14:07:39.844879] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.844981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.844996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:53.267 [2024-11-04 14:07:39.845007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:33:53.267 [2024-11-04 14:07:39.845018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.859596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.859647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:53.267 [2024-11-04 14:07:39.859661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.553 ms 00:33:53.267 [2024-11-04 14:07:39.859671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.874702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.874737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:53.267 [2024-11-04 14:07:39.874750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.994 ms 00:33:53.267 [2024-11-04 14:07:39.874760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.889445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.889498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:53.267 [2024-11-04 14:07:39.889512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.649 ms 00:33:53.267 [2024-11-04 14:07:39.889522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.904000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.267 [2024-11-04 14:07:39.904033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:53.267 [2024-11-04 14:07:39.904046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.387 ms 00:33:53.267 [2024-11-04 14:07:39.904071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.267 [2024-11-04 14:07:39.904106] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:53.267 [2024-11-04 14:07:39.904123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:53.267 [2024-11-04 14:07:39.904136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:53.267 [2024-11-04 14:07:39.904147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:53.267 [2024-11-04 14:07:39.904158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:53.267 [2024-11-04 14:07:39.904169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:53.267 [2024-11-04 14:07:39.904181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:53.267 [2024-11-04 14:07:39.904191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:53.267 [2024-11-04 14:07:39.904201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 
[2024-11-04 14:07:39.904212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:53.268 [2024-11-04 14:07:39.904318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:53.268 [2024-11-04 14:07:39.904328] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 876708f9-d295-4d45-ada8-565130fddb77 00:33:53.268 [2024-11-04 14:07:39.904339] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:53.268 [2024-11-04 14:07:39.904349] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:53.268 [2024-11-04 14:07:39.904359] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:53.268 [2024-11-04 14:07:39.904369] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:53.268 [2024-11-04 14:07:39.904378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:53.268 [2024-11-04 14:07:39.904388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:53.268 [2024-11-04 14:07:39.904398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:53.268 [2024-11-04 14:07:39.904407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:53.268 [2024-11-04 14:07:39.904416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:53.268 [2024-11-04 14:07:39.904427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.268 [2024-11-04 14:07:39.904443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:53.268 [2024-11-04 14:07:39.904454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:33:53.268 [2024-11-04 14:07:39.904464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.268 [2024-11-04 14:07:39.924430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.268 [2024-11-04 14:07:39.924607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:53.268 [2024-11-04 14:07:39.924629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.935 ms 00:33:53.268 [2024-11-04 14:07:39.924657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:33:53.268 [2024-11-04 14:07:39.925272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.268 [2024-11-04 14:07:39.925287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:53.268 [2024-11-04 14:07:39.925299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.588 ms 00:33:53.268 [2024-11-04 14:07:39.925309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.268 [2024-11-04 14:07:39.991440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.268 [2024-11-04 14:07:39.991649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:53.268 [2024-11-04 14:07:39.991673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.268 [2024-11-04 14:07:39.991685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.268 [2024-11-04 14:07:39.991733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.268 [2024-11-04 14:07:39.991744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:53.268 [2024-11-04 14:07:39.991755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.268 [2024-11-04 14:07:39.991765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.268 [2024-11-04 14:07:39.991851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.268 [2024-11-04 14:07:39.991865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:53.268 [2024-11-04 14:07:39.991876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.268 [2024-11-04 14:07:39.991886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.268 [2024-11-04 14:07:39.991904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.268 [2024-11-04 14:07:39.991921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:53.268 [2024-11-04 14:07:39.991932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.268 [2024-11-04 14:07:39.991942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.268 [2024-11-04 14:07:40.121759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.268 [2024-11-04 14:07:40.121820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:53.268 [2024-11-04 14:07:40.121835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.268 [2024-11-04 14:07:40.121863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:53.527 [2024-11-04 14:07:40.222212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:53.527 [2024-11-04 14:07:40.222379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222389] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:53.527 [2024-11-04 14:07:40.222473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:53.527 [2024-11-04 14:07:40.222672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:53.527 [2024-11-04 14:07:40.222746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:53.527 [2024-11-04 14:07:40.222820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.222873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:53.527 [2024-11-04 14:07:40.222885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:53.527 [2024-11-04 14:07:40.222899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:53.527 [2024-11-04 14:07:40.222909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.527 [2024-11-04 14:07:40.223029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 408.948 ms, result 0 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:54.905 Remove shared memory files 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:54.905 14:07:41 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81801 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:54.905 ************************************ 00:33:54.905 END TEST ftl_upgrade_shutdown 00:33:54.905 ************************************ 00:33:54.905 00:33:54.905 real 1m29.870s 00:33:54.905 user 2m5.351s 00:33:54.905 sys 0m23.291s 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:54.905 14:07:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@14 -- # killprocess 74920 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@952 -- # '[' -z 74920 ']' 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@956 -- # kill -0 74920 00:33:54.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74920) - No such process 00:33:54.905 Process with pid 74920 is not found 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 74920 is not found' 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82257 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:54.905 14:07:41 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82257 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@833 -- # '[' -z 82257 ']' 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:54.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:54.905 14:07:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:54.905 [2024-11-04 14:07:41.710519] Starting SPDK v25.01-pre git sha1 1ca833860 / DPDK 24.03.0 initialization... 
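The trace just above tears down the test's spdk_tgt with killprocess and then starts a fresh one, blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that start/stop pattern, assuming the standard scripts/rpc.py entry point; the retry budget and helper bodies here are illustrative, not the exact autotest_common.sh implementations:

    # Illustrative sketch only; not the exact autotest_common.sh helpers.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do               # retry budget is an assumption
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            # rpc_get_methods is a real SPDK RPC; success means the socket is up
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone, as with pid 74920 above
        kill "$pid" && wait "$pid"                    # wait works because the target is our child
    }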
00:33:54.905 [2024-11-04 14:07:41.710712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82257 ] 00:33:55.164 [2024-11-04 14:07:41.905098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.164 [2024-11-04 14:07:42.021401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.100 14:07:42 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:56.101 14:07:42 ftl -- common/autotest_common.sh@866 -- # return 0 00:33:56.101 14:07:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:56.359 nvme0n1 00:33:56.359 14:07:43 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:56.359 14:07:43 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:56.359 14:07:43 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:56.618 14:07:43 ftl -- ftl/common.sh@28 -- # stores=9bcea4bc-2b5d-43d0-aab5-cf6d2beb5d3b 00:33:56.618 14:07:43 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:56.618 14:07:43 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9bcea4bc-2b5d-43d0-aab5-cf6d2beb5d3b 00:33:56.876 14:07:43 ftl -- ftl/ftl.sh@23 -- # killprocess 82257 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@952 -- # '[' -z 82257 ']' 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@956 -- # kill -0 82257 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@957 -- # uname 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82257 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:56.876 killing process with pid 82257 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82257' 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@971 -- # kill 82257 00:33:56.876 14:07:43 ftl -- common/autotest_common.sh@976 -- # wait 82257 00:33:59.403 14:07:46 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:59.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:59.660 Waiting for block devices as requested 00:33:59.660 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:59.919 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:59.919 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:00.177 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:05.463 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:05.463 Remove shared memory files 00:34:05.463 14:07:51 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:05.463 14:07:51 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:05.463 14:07:51 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:05.463 14:07:51 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:05.463 14:07:51 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:05.463 14:07:51 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:05.463 14:07:51 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:05.463 
************************************ 00:34:05.463 END TEST ftl 00:34:05.463 ************************************ 00:34:05.463 00:34:05.463 real 10m53.267s 00:34:05.463 user 13m44.246s 00:34:05.463 sys 1m36.193s 00:34:05.463 14:07:51 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:05.463 14:07:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:05.463 14:07:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:05.463 14:07:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:05.463 14:07:52 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:05.463 14:07:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:05.463 14:07:52 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:05.463 14:07:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:05.463 14:07:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:05.463 14:07:52 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:05.463 14:07:52 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:05.463 14:07:52 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:05.463 14:07:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:05.463 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:34:05.463 14:07:52 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:05.463 14:07:52 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:34:05.463 14:07:52 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:34:05.463 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:34:07.386 INFO: APP EXITING 00:34:07.386 INFO: killing all VMs 00:34:07.386 INFO: killing vhost app 00:34:07.386 INFO: EXIT DONE 00:34:07.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:08.217 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:08.217 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:08.217 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:08.217 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:08.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:09.041 Cleaning 00:34:09.041 Removing: /var/run/dpdk/spdk0/config 00:34:09.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:09.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:09.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:09.041 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:09.041 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:09.041 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:09.041 Removing: /var/run/dpdk/spdk0 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58161 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58407 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58647 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58755 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58818 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58959 00:34:09.041 Removing: /var/run/dpdk/spdk_pid58977 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59187 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59316 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59432 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59559 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59678 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59718 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59754 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59830 00:34:09.041 Removing: /var/run/dpdk/spdk_pid59946 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60413 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60494 
00:34:09.041 Removing: /var/run/dpdk/spdk_pid60574 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60600 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60766 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60787 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60952 00:34:09.041 Removing: /var/run/dpdk/spdk_pid60968 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61043 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61072 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61142 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61165 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61377 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61412 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61497 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61702 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61808 00:34:09.041 Removing: /var/run/dpdk/spdk_pid61850 00:34:09.041 Removing: /var/run/dpdk/spdk_pid62336 00:34:09.041 Removing: /var/run/dpdk/spdk_pid62440 00:34:09.041 Removing: /var/run/dpdk/spdk_pid62560 00:34:09.041 Removing: /var/run/dpdk/spdk_pid62624 00:34:09.041 Removing: /var/run/dpdk/spdk_pid62655 00:34:09.299 Removing: /var/run/dpdk/spdk_pid62739 00:34:09.299 Removing: /var/run/dpdk/spdk_pid63387 00:34:09.299 Removing: /var/run/dpdk/spdk_pid63436 00:34:09.299 Removing: /var/run/dpdk/spdk_pid63959 00:34:09.299 Removing: /var/run/dpdk/spdk_pid64068 00:34:09.299 Removing: /var/run/dpdk/spdk_pid64183 00:34:09.299 Removing: /var/run/dpdk/spdk_pid64236 00:34:09.299 Removing: /var/run/dpdk/spdk_pid64267 00:34:09.299 Removing: /var/run/dpdk/spdk_pid64298 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66198 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66353 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66357 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66380 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66419 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66423 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66435 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66485 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66489 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66501 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66551 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66555 00:34:09.299 Removing: /var/run/dpdk/spdk_pid66567 00:34:09.299 Removing: /var/run/dpdk/spdk_pid67948 00:34:09.299 Removing: /var/run/dpdk/spdk_pid68067 00:34:09.299 Removing: /var/run/dpdk/spdk_pid69495 00:34:09.299 Removing: /var/run/dpdk/spdk_pid70880 00:34:09.299 Removing: /var/run/dpdk/spdk_pid71006 00:34:09.299 Removing: /var/run/dpdk/spdk_pid71133 00:34:09.299 Removing: /var/run/dpdk/spdk_pid71255 00:34:09.299 Removing: /var/run/dpdk/spdk_pid71398 00:34:09.299 Removing: /var/run/dpdk/spdk_pid71482 00:34:09.299 Removing: /var/run/dpdk/spdk_pid71631 00:34:09.299 Removing: /var/run/dpdk/spdk_pid72016 00:34:09.299 Removing: /var/run/dpdk/spdk_pid72058 00:34:09.299 Removing: /var/run/dpdk/spdk_pid72560 00:34:09.299 Removing: /var/run/dpdk/spdk_pid72747 00:34:09.299 Removing: /var/run/dpdk/spdk_pid72860 00:34:09.299 Removing: /var/run/dpdk/spdk_pid72973 00:34:09.299 Removing: /var/run/dpdk/spdk_pid73034 00:34:09.299 Removing: /var/run/dpdk/spdk_pid73065 00:34:09.299 Removing: /var/run/dpdk/spdk_pid73359 00:34:09.299 Removing: /var/run/dpdk/spdk_pid73431 00:34:09.299 Removing: /var/run/dpdk/spdk_pid73529 00:34:09.299 Removing: /var/run/dpdk/spdk_pid73972 00:34:09.299 Removing: /var/run/dpdk/spdk_pid74118 00:34:09.299 Removing: /var/run/dpdk/spdk_pid74920 00:34:09.299 Removing: /var/run/dpdk/spdk_pid75080 00:34:09.299 Removing: /var/run/dpdk/spdk_pid75299 00:34:09.299 Removing: 
/var/run/dpdk/spdk_pid75414 00:34:09.299 Removing: /var/run/dpdk/spdk_pid75792 00:34:09.299 Removing: /var/run/dpdk/spdk_pid76080 00:34:09.299 Removing: /var/run/dpdk/spdk_pid76448 00:34:09.299 Removing: /var/run/dpdk/spdk_pid76669 00:34:09.299 Removing: /var/run/dpdk/spdk_pid76793 00:34:09.299 Removing: /var/run/dpdk/spdk_pid76873 00:34:09.299 Removing: /var/run/dpdk/spdk_pid77001 00:34:09.299 Removing: /var/run/dpdk/spdk_pid77040 00:34:09.299 Removing: /var/run/dpdk/spdk_pid77121 00:34:09.299 Removing: /var/run/dpdk/spdk_pid77325 00:34:09.299 Removing: /var/run/dpdk/spdk_pid77612 00:34:09.299 Removing: /var/run/dpdk/spdk_pid77950 00:34:09.299 Removing: /var/run/dpdk/spdk_pid78325 00:34:09.299 Removing: /var/run/dpdk/spdk_pid78672 00:34:09.299 Removing: /var/run/dpdk/spdk_pid79128 00:34:09.299 Removing: /var/run/dpdk/spdk_pid79276 00:34:09.299 Removing: /var/run/dpdk/spdk_pid79380 00:34:09.299 Removing: /var/run/dpdk/spdk_pid79977 00:34:09.299 Removing: /var/run/dpdk/spdk_pid80054 00:34:09.299 Removing: /var/run/dpdk/spdk_pid80434 00:34:09.299 Removing: /var/run/dpdk/spdk_pid80771 00:34:09.299 Removing: /var/run/dpdk/spdk_pid81215 00:34:09.299 Removing: /var/run/dpdk/spdk_pid81332 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81390 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81460 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81516 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81590 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81801 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81871 00:34:09.557 Removing: /var/run/dpdk/spdk_pid81938 00:34:09.557 Removing: /var/run/dpdk/spdk_pid82005 00:34:09.557 Removing: /var/run/dpdk/spdk_pid82040 00:34:09.557 Removing: /var/run/dpdk/spdk_pid82119 00:34:09.557 Removing: /var/run/dpdk/spdk_pid82257 00:34:09.557 Clean 00:34:09.557 14:07:56 -- common/autotest_common.sh@1451 -- # return 0 00:34:09.557 14:07:56 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:09.557 14:07:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.557 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:34:09.557 14:07:56 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:34:09.557 14:07:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.557 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:34:09.557 14:07:56 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:09.557 14:07:56 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:09.557 14:07:56 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:09.557 14:07:56 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:09.557 14:07:56 -- spdk/autotest.sh@394 -- # hostname 00:34:09.557 14:07:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:09.815 geninfo: WARNING: invalid characters removed from testname! 
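The coverage epilogue that begins here chains several long lcov invocations: the call just above captured the test-run counters into cov_test.info, and the lines below merge them with the pre-run baseline and strip third-party paths. Condensed, with the repeated --rc option lists and absolute paths elided for readability, the flow is:

    # Same lcov flow as the surrounding log, minus the --rc boilerplate.
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info  # capture counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                # merge with baseline
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                     # drop bundled DPDK
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info                       # drop system headers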
00:34:36.389 14:08:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:37.324 14:08:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:39.859 14:08:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:41.767 14:08:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:44.296 14:08:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:46.200 14:08:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:48.475 14:08:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:48.475 14:08:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:48.475 14:08:35 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:48.475 14:08:35 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:48.475 14:08:35 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:48.475 14:08:35 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:48.475 + [[ -n 5303 ]] 00:34:48.475 + sudo kill 5303 00:34:48.484 [Pipeline] } 00:34:48.499 [Pipeline] // timeout 00:34:48.505 [Pipeline] } 00:34:48.519 [Pipeline] // stage 00:34:48.525 [Pipeline] } 00:34:48.541 [Pipeline] // catchError 00:34:48.552 [Pipeline] stage 00:34:48.555 [Pipeline] { (Stop VM) 00:34:48.567 [Pipeline] sh 00:34:48.847 + vagrant halt 00:34:52.135 ==> default: Halting domain... 
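The Stop VM stage above follows the usual two-step teardown: a graceful halt first, then a forced destroy so the job never blocks on a confirmation prompt. In plain shell:

    vagrant halt            # graceful shutdown of the test VM ("Halting domain..." above)
    vagrant destroy -f      # delete the domain and its disks ("Removing domain..." below)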
00:34:58.738 [Pipeline] sh 00:34:59.018 + vagrant destroy -f 00:35:02.335 ==> default: Removing domain... 00:35:03.296 [Pipeline] sh 00:35:03.576 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:03.585 [Pipeline] } 00:35:03.607 [Pipeline] // stage 00:35:03.613 [Pipeline] } 00:35:03.628 [Pipeline] // dir 00:35:03.634 [Pipeline] } 00:35:03.649 [Pipeline] // wrap 00:35:03.655 [Pipeline] } 00:35:03.676 [Pipeline] // catchError 00:35:03.685 [Pipeline] stage 00:35:03.687 [Pipeline] { (Epilogue) 00:35:03.700 [Pipeline] sh 00:35:03.983 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:10.605 [Pipeline] catchError 00:35:10.607 [Pipeline] { 00:35:10.619 [Pipeline] sh 00:35:10.901 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:11.168 Artifacts sizes are good 00:35:11.227 [Pipeline] } 00:35:11.242 [Pipeline] // catchError 00:35:11.254 [Pipeline] archiveArtifacts 00:35:11.261 Archiving artifacts 00:35:11.372 [Pipeline] cleanWs 00:35:11.384 [WS-CLEANUP] Deleting project workspace... 00:35:11.384 [WS-CLEANUP] Deferred wipeout is used... 00:35:11.390 [WS-CLEANUP] done 00:35:11.392 [Pipeline] } 00:35:11.407 [Pipeline] // stage 00:35:11.415 [Pipeline] } 00:35:11.429 [Pipeline] // node 00:35:11.434 [Pipeline] End of Pipeline 00:35:11.470 Finished: SUCCESS
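One closing note on the epilogue: before archiving, the pipeline gates on jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh, which printed "Artifacts sizes are good" above. A minimal sketch of such a guard, assuming a du-based check; the 256 MiB limit and the output directory name are illustrative, not the script's actual values:

    # Hypothetical size guard; limit and path are assumptions for illustration.
    max_kb=$((256 * 1024))
    total_kb=$(du -sk output | cut -f1)
    if ((total_kb > max_kb)); then
        echo "Artifacts too large: ${total_kb} KB" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"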