00:00:00.000 Started by upstream project "autotest-per-patch" build number 131248 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.016 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.045 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.068 Using shallow fetch with depth 1 00:00:00.068 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.068 > git --version # timeout=10 00:00:00.084 > git --version # 'git version 2.39.2' 00:00:00.084 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.101 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.101 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.117 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.129 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.145 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:04.145 > git config core.sparsecheckout # timeout=10 00:00:04.159 > git read-tree -mu HEAD # timeout=10 00:00:04.175 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:04.195 Commit message: "packer: Fix typo in a package name" 00:00:04.195 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:04.309 [Pipeline] Start of Pipeline 00:00:04.320 [Pipeline] library 00:00:04.321 Loading library shm_lib@master 00:00:04.321 Library shm_lib@master is cached. Copying from home. 00:00:04.334 [Pipeline] node 00:00:04.345 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:04.346 [Pipeline] { 00:00:04.356 [Pipeline] catchError 00:00:04.358 [Pipeline] { 00:00:04.367 [Pipeline] wrap 00:00:04.373 [Pipeline] { 00:00:04.379 [Pipeline] stage 00:00:04.380 [Pipeline] { (Prologue) 00:00:04.393 [Pipeline] echo 00:00:04.395 Node: VM-host-SM38 00:00:04.399 [Pipeline] cleanWs 00:00:04.409 [WS-CLEANUP] Deleting project workspace... 00:00:04.409 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.417 [WS-CLEANUP] done 00:00:04.625 [Pipeline] setCustomBuildProperty 00:00:04.694 [Pipeline] httpRequest 00:00:05.065 [Pipeline] echo 00:00:05.066 Sorcerer 10.211.164.101 is alive 00:00:05.074 [Pipeline] retry 00:00:05.075 [Pipeline] { 00:00:05.087 [Pipeline] httpRequest 00:00:05.091 HttpMethod: GET 00:00:05.092 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:05.092 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:05.117 Response Code: HTTP/1.1 200 OK 00:00:05.117 Success: Status code 200 is in the accepted range: 200,404 00:00:05.118 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:16.481 [Pipeline] } 00:00:16.498 [Pipeline] // retry 00:00:16.505 [Pipeline] sh 00:00:16.789 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:16.805 [Pipeline] httpRequest 00:00:17.382 [Pipeline] echo 00:00:17.383 Sorcerer 10.211.164.101 is alive 00:00:17.393 [Pipeline] retry 00:00:17.395 [Pipeline] { 00:00:17.408 [Pipeline] httpRequest 00:00:17.412 HttpMethod: GET 00:00:17.413 URL: http://10.211.164.101/packages/spdk_2a2bf59c26d6f7717b2ae6fe94d9a8523f51a175.tar.gz 00:00:17.413 Sending request to url: http://10.211.164.101/packages/spdk_2a2bf59c26d6f7717b2ae6fe94d9a8523f51a175.tar.gz 00:00:17.433 Response Code: HTTP/1.1 200 OK 00:00:17.433 Success: Status code 200 is in the accepted range: 200,404 00:00:17.434 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_2a2bf59c26d6f7717b2ae6fe94d9a8523f51a175.tar.gz 00:02:02.456 [Pipeline] } 00:02:02.471 [Pipeline] // retry 00:02:02.477 [Pipeline] sh 00:02:02.760 + tar --no-same-owner -xf spdk_2a2bf59c26d6f7717b2ae6fe94d9a8523f51a175.tar.gz 00:02:06.074 [Pipeline] sh 00:02:06.359 + git -C spdk log --oneline -n5 00:02:06.359 2a2bf59c2 nvmf: add function for setting ns visibility 00:02:06.359 ffd9f7465 bdev/nvme: Fix crash due to NULL io_path 00:02:06.359 ee513ce4a lib/reduce: If init fails, unlink meta file 00:02:06.359 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API 00:02:06.359 a70c3a90b bdev/lvol: add allocated clusters num in bdev_lvol_get_lvols 00:02:06.378 [Pipeline] writeFile 00:02:06.393 [Pipeline] sh 00:02:06.680 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:06.694 [Pipeline] sh 00:02:06.980 + cat autorun-spdk.conf 00:02:06.980 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.980 SPDK_TEST_NVME=1 00:02:06.980 SPDK_TEST_FTL=1 00:02:06.980 SPDK_TEST_ISAL=1 00:02:06.980 SPDK_RUN_ASAN=1 00:02:06.980 SPDK_RUN_UBSAN=1 00:02:06.980 SPDK_TEST_XNVME=1 00:02:06.980 SPDK_TEST_NVME_FDP=1 00:02:06.980 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.989 RUN_NIGHTLY=0 00:02:06.991 [Pipeline] } 00:02:07.035 [Pipeline] // stage 00:02:07.048 [Pipeline] stage 00:02:07.049 [Pipeline] { (Run VM) 00:02:07.056 [Pipeline] sh 00:02:07.337 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:07.337 + echo 'Start stage prepare_nvme.sh' 00:02:07.337 Start stage prepare_nvme.sh 00:02:07.337 + [[ -n 1 ]] 00:02:07.337 + disk_prefix=ex1 00:02:07.337 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:02:07.337 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:02:07.337 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:02:07.337 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.337 ++ SPDK_TEST_NVME=1 00:02:07.337 ++ SPDK_TEST_FTL=1 00:02:07.337 ++ SPDK_TEST_ISAL=1 00:02:07.337 
++ SPDK_RUN_ASAN=1 00:02:07.337 ++ SPDK_RUN_UBSAN=1 00:02:07.337 ++ SPDK_TEST_XNVME=1 00:02:07.337 ++ SPDK_TEST_NVME_FDP=1 00:02:07.337 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.337 ++ RUN_NIGHTLY=0 00:02:07.337 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:02:07.337 + nvme_files=() 00:02:07.337 + declare -A nvme_files 00:02:07.337 + backend_dir=/var/lib/libvirt/images/backends 00:02:07.337 + nvme_files['nvme.img']=5G 00:02:07.337 + nvme_files['nvme-cmb.img']=5G 00:02:07.337 + nvme_files['nvme-multi0.img']=4G 00:02:07.337 + nvme_files['nvme-multi1.img']=4G 00:02:07.337 + nvme_files['nvme-multi2.img']=4G 00:02:07.337 + nvme_files['nvme-openstack.img']=8G 00:02:07.337 + nvme_files['nvme-zns.img']=5G 00:02:07.337 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:07.337 + (( SPDK_TEST_FTL == 1 )) 00:02:07.337 + nvme_files["nvme-ftl.img"]=6G 00:02:07.337 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:07.337 + nvme_files["nvme-fdp.img"]=1G 00:02:07.337 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:07.337 + for nvme in "${!nvme_files[@]}" 00:02:07.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:07.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:07.337 + for nvme in "${!nvme_files[@]}" 00:02:07.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:07.909 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:07.909 + for nvme in "${!nvme_files[@]}" 00:02:07.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:07.910 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:07.910 + for nvme in "${!nvme_files[@]}" 00:02:07.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:08.171 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:08.171 + for nvme in "${!nvme_files[@]}" 00:02:08.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:08.171 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.171 + for nvme in "${!nvme_files[@]}" 00:02:08.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:08.171 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.171 + for nvme in "${!nvme_files[@]}" 00:02:08.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:08.171 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.171 + for nvme in "${!nvme_files[@]}" 00:02:08.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:08.171 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:08.171 + for nvme in "${!nvme_files[@]}" 00:02:08.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:08.432 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.432 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:08.432 + echo 'End stage prepare_nvme.sh' 00:02:08.432 End stage prepare_nvme.sh 00:02:08.444 [Pipeline] sh 00:02:08.729 + DISTRO=fedora39 00:02:08.729 + CPUS=10 00:02:08.729 + RAM=12288 00:02:08.729 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:08.729 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:08.729 00:02:08.729 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:02:08.729 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:02:08.729 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:02:08.729 HELP=0 00:02:08.729 DRY_RUN=0 00:02:08.729 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:08.729 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:08.729 NVME_AUTO_CREATE=0 00:02:08.729 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:08.729 NVME_CMB=,,,, 00:02:08.729 NVME_PMR=,,,, 00:02:08.729 NVME_ZNS=,,,, 00:02:08.729 NVME_MS=true,,,, 00:02:08.729 NVME_FDP=,,,on, 00:02:08.729 SPDK_VAGRANT_DISTRO=fedora39 00:02:08.729 SPDK_VAGRANT_VMCPU=10 00:02:08.729 SPDK_VAGRANT_VMRAM=12288 00:02:08.729 SPDK_VAGRANT_PROVIDER=libvirt 00:02:08.729 SPDK_VAGRANT_HTTP_PROXY= 00:02:08.729 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:08.729 SPDK_OPENSTACK_NETWORK=0 00:02:08.729 VAGRANT_PACKAGE_BOX=0 00:02:08.729 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:08.729 FORCE_DISTRO=true 00:02:08.729 VAGRANT_BOX_VERSION= 00:02:08.729 EXTRA_VAGRANTFILES= 00:02:08.729 NIC_MODEL=e1000 00:02:08.729 00:02:08.729 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:02:08.729 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:02:11.276 Bringing machine 'default' up with 'libvirt' provider... 00:02:11.846 ==> default: Creating image (snapshot of base box volume). 00:02:12.105 ==> default: Creating domain with the following settings... 
00:02:12.105 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729159274_75d904746d8790d4aa12 00:02:12.105 ==> default: -- Domain type: kvm 00:02:12.105 ==> default: -- Cpus: 10 00:02:12.105 ==> default: -- Feature: acpi 00:02:12.105 ==> default: -- Feature: apic 00:02:12.105 ==> default: -- Feature: pae 00:02:12.105 ==> default: -- Memory: 12288M 00:02:12.105 ==> default: -- Memory Backing: hugepages: 00:02:12.105 ==> default: -- Management MAC: 00:02:12.105 ==> default: -- Loader: 00:02:12.105 ==> default: -- Nvram: 00:02:12.105 ==> default: -- Base box: spdk/fedora39 00:02:12.105 ==> default: -- Storage pool: default 00:02:12.105 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729159274_75d904746d8790d4aa12.img (20G) 00:02:12.105 ==> default: -- Volume Cache: default 00:02:12.105 ==> default: -- Kernel: 00:02:12.105 ==> default: -- Initrd: 00:02:12.105 ==> default: -- Graphics Type: vnc 00:02:12.105 ==> default: -- Graphics Port: -1 00:02:12.105 ==> default: -- Graphics IP: 127.0.0.1 00:02:12.105 ==> default: -- Graphics Password: Not defined 00:02:12.105 ==> default: -- Video Type: cirrus 00:02:12.105 ==> default: -- Video VRAM: 9216 00:02:12.105 ==> default: -- Sound Type: 00:02:12.105 ==> default: -- Keymap: en-us 00:02:12.105 ==> default: -- TPM Path: 00:02:12.105 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:12.105 ==> default: -- Command line args: 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:12.105 ==> default: -> value=-drive, 00:02:12.105 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:12.105 ==> default: -> value=-drive, 00:02:12.105 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:12.105 ==> default: -> value=-drive, 00:02:12.105 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.105 ==> default: -> value=-drive, 00:02:12.105 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.105 ==> default: -> value=-drive, 00:02:12.105 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:12.105 ==> default: -> value=-drive, 00:02:12.105 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:12.105 ==> default: -> value=-device, 00:02:12.105 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.105 ==> default: Creating shared folders metadata... 00:02:12.105 ==> default: Starting domain. 00:02:14.019 ==> default: Waiting for domain to get an IP address... 00:02:32.277 ==> default: Waiting for SSH to become available... 00:02:32.277 ==> default: Configuring and enabling network interfaces... 00:02:36.486 default: SSH address: 192.168.121.146:22 00:02:36.486 default: SSH username: vagrant 00:02:36.486 default: SSH auth method: private key 00:02:38.445 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:46.587 ==> default: Mounting SSHFS shared folder... 00:02:49.137 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:49.137 ==> default: Checking Mount.. 00:02:50.080 ==> default: Folder Successfully Mounted! 00:02:50.080 00:02:50.080 SUCCESS! 00:02:50.080 00:02:50.080 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:50.080 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:50.080 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:50.080 00:02:50.091 [Pipeline] } 00:02:50.107 [Pipeline] // stage 00:02:50.117 [Pipeline] dir 00:02:50.118 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:02:50.120 [Pipeline] { 00:02:50.135 [Pipeline] catchError 00:02:50.137 [Pipeline] { 00:02:50.153 [Pipeline] sh 00:02:50.441 + vagrant ssh-config --host vagrant 00:02:50.441 + sed -ne '/^Host/,$p' 00:02:50.441 + tee ssh_conf 00:02:53.739 Host vagrant 00:02:53.739 HostName 192.168.121.146 00:02:53.739 User vagrant 00:02:53.739 Port 22 00:02:53.739 UserKnownHostsFile /dev/null 00:02:53.739 StrictHostKeyChecking no 00:02:53.739 PasswordAuthentication no 00:02:53.739 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:53.739 IdentitiesOnly yes 00:02:53.739 LogLevel FATAL 00:02:53.739 ForwardAgent yes 00:02:53.739 ForwardX11 yes 00:02:53.739 00:02:53.752 [Pipeline] withEnv 00:02:53.754 [Pipeline] { 00:02:53.767 [Pipeline] sh 00:02:54.042 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:54.043 source /etc/os-release 00:02:54.043 [[ -e /image.version ]] && img=$(< /image.version) 00:02:54.043 # Minimal, systemd-like check. 
00:02:54.043 if [[ -e /.dockerenv ]]; then 00:02:54.043 # Clear garbage from the node'\''s name: 00:02:54.043 # agt-er_autotest_547-896 -> autotest_547-896 00:02:54.043 # $HOSTNAME is the actual container id 00:02:54.043 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:54.043 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:54.043 # We can assume this is a mount from a host where container is running, 00:02:54.043 # so fetch its hostname to easily identify the target swarm worker. 00:02:54.043 container="$(< /etc/hostname) ($agent)" 00:02:54.043 else 00:02:54.043 # Fallback 00:02:54.043 container=$agent 00:02:54.043 fi 00:02:54.043 fi 00:02:54.043 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:54.043 ' 00:02:54.053 [Pipeline] } 00:02:54.072 [Pipeline] // withEnv 00:02:54.080 [Pipeline] setCustomBuildProperty 00:02:54.095 [Pipeline] stage 00:02:54.098 [Pipeline] { (Tests) 00:02:54.114 [Pipeline] sh 00:02:54.391 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:54.661 [Pipeline] sh 00:02:55.004 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:55.018 [Pipeline] timeout 00:02:55.019 Timeout set to expire in 50 min 00:02:55.021 [Pipeline] { 00:02:55.034 [Pipeline] sh 00:02:55.311 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:55.876 HEAD is now at 2a2bf59c2 nvmf: add function for setting ns visibility 00:02:55.888 [Pipeline] sh 00:02:56.163 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:56.435 [Pipeline] sh 00:02:56.732 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:57.005 [Pipeline] sh 00:02:57.281 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:57.540 ++ readlink -f spdk_repo 00:02:57.540 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:57.540 + [[ -n /home/vagrant/spdk_repo ]] 00:02:57.540 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:57.540 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:57.540 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:57.540 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:57.540 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:57.540 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:57.540 + cd /home/vagrant/spdk_repo 00:02:57.540 + source /etc/os-release 00:02:57.540 ++ NAME='Fedora Linux' 00:02:57.540 ++ VERSION='39 (Cloud Edition)' 00:02:57.540 ++ ID=fedora 00:02:57.540 ++ VERSION_ID=39 00:02:57.540 ++ VERSION_CODENAME= 00:02:57.540 ++ PLATFORM_ID=platform:f39 00:02:57.540 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:57.540 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:57.540 ++ LOGO=fedora-logo-icon 00:02:57.540 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:57.540 ++ HOME_URL=https://fedoraproject.org/ 00:02:57.540 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:57.540 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:57.540 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:57.540 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:57.540 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:57.540 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:57.540 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:57.540 ++ SUPPORT_END=2024-11-12 00:02:57.540 ++ VARIANT='Cloud Edition' 00:02:57.540 ++ VARIANT_ID=cloud 00:02:57.540 + uname -a 00:02:57.540 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:57.540 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:57.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:58.058 Hugepages 00:02:58.058 node hugesize free / total 00:02:58.058 node0 1048576kB 0 / 0 00:02:58.058 node0 2048kB 0 / 0 00:02:58.058 00:02:58.058 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.058 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:58.058 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:58.058 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:58.058 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:58.058 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:58.058 + rm -f /tmp/spdk-ld-path 00:02:58.058 + source autorun-spdk.conf 00:02:58.058 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:58.058 ++ SPDK_TEST_NVME=1 00:02:58.058 ++ SPDK_TEST_FTL=1 00:02:58.058 ++ SPDK_TEST_ISAL=1 00:02:58.058 ++ SPDK_RUN_ASAN=1 00:02:58.058 ++ SPDK_RUN_UBSAN=1 00:02:58.058 ++ SPDK_TEST_XNVME=1 00:02:58.058 ++ SPDK_TEST_NVME_FDP=1 00:02:58.058 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:58.058 ++ RUN_NIGHTLY=0 00:02:58.058 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:58.058 + [[ -n '' ]] 00:02:58.058 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:58.316 + for M in /var/spdk/build-*-manifest.txt 00:02:58.316 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:58.316 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.316 + for M in /var/spdk/build-*-manifest.txt 00:02:58.316 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:58.316 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.316 + for M in /var/spdk/build-*-manifest.txt 00:02:58.316 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:58.316 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.316 ++ uname 00:02:58.316 + [[ Linux == \L\i\n\u\x ]] 00:02:58.316 + sudo dmesg -T 00:02:58.316 + sudo dmesg --clear 00:02:58.316 + dmesg_pid=5031 00:02:58.316 
+ [[ Fedora Linux == FreeBSD ]] 00:02:58.316 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:58.316 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:58.316 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:58.316 + [[ -x /usr/src/fio-static/fio ]] 00:02:58.316 + sudo dmesg -Tw 00:02:58.573 + export FIO_BIN=/usr/src/fio-static/fio 00:02:58.573 + FIO_BIN=/usr/src/fio-static/fio 00:02:58.573 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:58.573 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:58.573 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:58.573 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:58.573 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:58.573 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:58.573 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:58.573 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:58.573 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:58.573 Test configuration: 00:02:58.573 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:58.573 SPDK_TEST_NVME=1 00:02:58.573 SPDK_TEST_FTL=1 00:02:58.573 SPDK_TEST_ISAL=1 00:02:58.573 SPDK_RUN_ASAN=1 00:02:58.573 SPDK_RUN_UBSAN=1 00:02:58.573 SPDK_TEST_XNVME=1 00:02:58.573 SPDK_TEST_NVME_FDP=1 00:02:58.573 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:58.573 RUN_NIGHTLY=0 10:02:01 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:58.573 10:02:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:58.573 10:02:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:58.573 10:02:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:58.573 10:02:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.573 10:02:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.573 10:02:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.573 10:02:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.573 10:02:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.573 10:02:01 -- paths/export.sh@5 -- $ export PATH 00:02:58.573 10:02:01 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.573 10:02:01 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:58.573 10:02:01 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:58.573 10:02:01 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729159321.XXXXXX 00:02:58.573 10:02:01 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729159321.KfNi4O 00:02:58.573 10:02:01 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:58.573 10:02:01 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:58.573 10:02:01 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:58.573 10:02:01 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:58.573 10:02:01 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:58.574 10:02:01 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:58.574 10:02:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:58.574 10:02:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.574 10:02:01 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:58.574 10:02:01 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:58.574 10:02:01 -- pm/common@17 -- $ local monitor 00:02:58.574 10:02:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.574 10:02:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.574 10:02:01 -- pm/common@25 -- $ sleep 1 00:02:58.574 10:02:01 -- pm/common@21 -- $ date +%s 00:02:58.574 10:02:01 -- pm/common@21 -- $ date +%s 00:02:58.574 10:02:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729159321 00:02:58.574 10:02:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729159321 00:02:58.574 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729159321_collect-cpu-load.pm.log 00:02:58.574 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729159321_collect-vmstat.pm.log 00:02:59.953 10:02:02 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:59.953 10:02:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:59.953 10:02:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:59.953 10:02:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.953 10:02:02 -- spdk/autobuild.sh@16 -- $ date -u 00:02:59.953 Thu Oct 17 10:02:02 AM UTC 2024 00:02:59.953 10:02:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:59.953 v25.01-pre-73-g2a2bf59c2 
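Note on the two "Redirecting to ..." lines earlier in this step: autobuild starts per-build resource monitors before compiling. A minimal sketch of that pattern, with paths and flags copied from the trace above (running the collectors in the background is an assumption; the pm/common wrapper manages the real lifecycle):

  out=/home/vagrant/spdk_repo/spdk/../output/power
  ts=$(date +%s)
  # both collectors log to $out and tag their logs with the monitor name and timestamp
  scripts/perf/pm/collect-cpu-load -d "$out" -l -p "monitor.autobuild.sh.$ts" &
  scripts/perf/pm/collect-vmstat -d "$out" -l -p "monitor.autobuild.sh.$ts" &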
00:02:59.953 10:02:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:59.953 10:02:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:59.953 10:02:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:59.953 10:02:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.953 10:02:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.953 ************************************ 00:02:59.953 START TEST asan 00:02:59.953 ************************************ 00:02:59.953 using asan 00:02:59.953 10:02:02 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:59.953 00:02:59.953 real 0m0.000s 00:02:59.953 user 0m0.000s 00:02:59.953 sys 0m0.000s 00:02:59.953 10:02:02 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.953 ************************************ 00:02:59.953 END TEST asan 00:02:59.953 ************************************ 00:02:59.953 10:02:02 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.953 10:02:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:59.953 10:02:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:59.953 10:02:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:59.953 10:02:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.953 10:02:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.953 ************************************ 00:02:59.953 START TEST ubsan 00:02:59.953 ************************************ 00:02:59.953 using ubsan 00:02:59.953 10:02:02 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:59.953 00:02:59.953 real 0m0.000s 00:02:59.953 user 0m0.000s 00:02:59.953 sys 0m0.000s 00:02:59.953 10:02:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.953 ************************************ 00:02:59.953 END TEST ubsan 00:02:59.953 ************************************ 00:02:59.953 10:02:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.953 10:02:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:59.953 10:02:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:59.953 10:02:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:59.953 10:02:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:59.953 10:02:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:59.953 10:02:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:59.953 10:02:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:59.953 10:02:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:59.954 10:02:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:59.954 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:59.954 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:00.215 Using 'verbs' RDMA provider 00:03:13.386 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:23.416 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:23.675 Creating mk/config.mk...done. 00:03:23.675 Creating mk/cc.flags.mk...done. 00:03:23.675 Type 'make' to build. 
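The configure invocation above can be replayed outside CI with the same flags; a sketch, assuming an equivalent checkout under spdk/ and fio sources at /usr/src/fio as on this VM:

  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage --with-ublk \
      --with-xnvme --with-shared
  make -j10

The make -j10 step below is the same build, driven through run_test.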
00:03:23.675 10:02:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:23.675 10:02:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:23.675 10:02:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:23.675 10:02:26 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.675 ************************************ 00:03:23.675 START TEST make 00:03:23.675 ************************************ 00:03:23.675 10:02:26 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:23.933 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:23.933 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:23.933 meson setup builddir \ 00:03:23.933 -Dwith-libaio=enabled \ 00:03:23.933 -Dwith-liburing=enabled \ 00:03:23.933 -Dwith-libvfn=disabled \ 00:03:23.933 -Dwith-spdk=disabled \ 00:03:23.933 -Dexamples=false \ 00:03:23.933 -Dtests=false \ 00:03:23.933 -Dtools=false && \ 00:03:23.933 meson compile -C builddir && \ 00:03:23.933 cd -) 00:03:23.933 make[1]: Nothing to be done for 'all'. 00:03:25.845 The Meson build system 00:03:25.845 Version: 1.5.0 00:03:25.845 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:25.845 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:25.845 Build type: native build 00:03:25.845 Project name: xnvme 00:03:25.845 Project version: 0.7.5 00:03:25.845 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:25.845 C linker for the host machine: cc ld.bfd 2.40-14 00:03:25.845 Host machine cpu family: x86_64 00:03:25.845 Host machine cpu: x86_64 00:03:25.845 Message: host_machine.system: linux 00:03:25.845 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:25.845 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:25.845 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:25.845 Run-time dependency threads found: YES 00:03:25.845 Has header "setupapi.h" : NO 00:03:25.845 Has header "linux/blkzoned.h" : YES 00:03:25.845 Has header "linux/blkzoned.h" : YES (cached) 00:03:25.845 Has header "libaio.h" : YES 00:03:25.845 Library aio found: YES 00:03:25.845 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:25.845 Run-time dependency liburing found: YES 2.2 00:03:25.845 Dependency libvfn skipped: feature with-libvfn disabled 00:03:25.845 Found CMake: /usr/bin/cmake (3.27.7) 00:03:25.845 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:25.845 Subproject spdk : skipped: feature with-spdk disabled 00:03:25.845 Run-time dependency appleframeworks found: NO (tried framework) 00:03:25.845 Run-time dependency appleframeworks found: NO (tried framework) 00:03:25.845 Library rt found: YES 00:03:25.845 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:25.845 Configuring xnvme_config.h using configuration 00:03:25.845 Configuring xnvme.spec using configuration 00:03:25.845 Run-time dependency bash-completion found: YES 2.11 00:03:25.845 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:25.845 Program cp found: YES (/usr/bin/cp) 00:03:25.845 Build targets in project: 3 00:03:25.845 00:03:25.845 xnvme 0.7.5 00:03:25.845 00:03:25.845 Subprojects 00:03:25.845 spdk : NO Feature 'with-spdk' disabled 00:03:25.845 00:03:25.845 User defined options 00:03:25.845 examples : false 00:03:25.845 tests : false 00:03:25.845 tools : false 00:03:25.845 with-libaio : enabled 00:03:25.845 with-liburing: enabled 00:03:25.845 with-libvfn : disabled 00:03:25.845 with-spdk : disabled 
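The xnvme configuration summarized above comes from the meson command printed at the start of this make step; as a standalone sketch with the same feature flags (directory layout assumed):

  cd spdk/xnvme
  meson setup builddir \
      -Dwith-libaio=enabled -Dwith-liburing=enabled -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir

Per the summary, libisal was not found and with-spdk is disabled, so the resulting libxnvme omits those backends; the ninja run that follows builds the remaining 76 targets.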
00:03:25.845 00:03:25.845 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:26.104 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:26.361 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:26.361 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:26.361 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:26.361 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:26.361 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:26.361 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:26.361 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:26.361 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:26.361 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:26.361 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:26.361 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:26.361 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:26.361 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:26.361 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:26.361 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:26.361 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:26.361 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:26.361 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:26.361 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:26.618 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:26.618 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:26.618 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:26.618 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:26.618 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:26.618 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:26.618 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:26.619 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:26.619 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:26.619 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:26.619 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:26.619 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:26.619 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:26.619 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:26.619 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:26.619 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:26.619 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:26.619 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:26.619 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:26.619 [39/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:26.619 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:26.619 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:26.619 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:26.619 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:26.619 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:26.619 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:26.619 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:26.619 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:26.619 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:26.619 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:26.619 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:26.619 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:26.619 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:26.619 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:26.876 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:26.876 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:26.876 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:26.876 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:26.876 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:26.876 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:26.876 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:26.876 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:26.876 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:26.876 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:26.876 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:26.876 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:26.876 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:26.876 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:26.876 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:26.876 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:26.876 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:26.876 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:27.156 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:27.156 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:27.415 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:27.415 [75/76] Linking static target lib/libxnvme.a 00:03:27.415 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:27.415 INFO: autodetecting backend as ninja 00:03:27.415 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:27.674 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:34.233 The Meson build system 00:03:34.233 Version: 1.5.0 00:03:34.233 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:34.233 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:34.233 Build type: native build 00:03:34.233 
Program cat found: YES (/usr/bin/cat) 00:03:34.233 Project name: DPDK 00:03:34.233 Project version: 24.03.0 00:03:34.233 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:34.233 C linker for the host machine: cc ld.bfd 2.40-14 00:03:34.233 Host machine cpu family: x86_64 00:03:34.233 Host machine cpu: x86_64 00:03:34.233 Message: ## Building in Developer Mode ## 00:03:34.233 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:34.233 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:34.233 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:34.233 Program python3 found: YES (/usr/bin/python3) 00:03:34.233 Program cat found: YES (/usr/bin/cat) 00:03:34.233 Compiler for C supports arguments -march=native: YES 00:03:34.233 Checking for size of "void *" : 8 00:03:34.233 Checking for size of "void *" : 8 (cached) 00:03:34.233 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:34.233 Library m found: YES 00:03:34.233 Library numa found: YES 00:03:34.233 Has header "numaif.h" : YES 00:03:34.233 Library fdt found: NO 00:03:34.233 Library execinfo found: NO 00:03:34.233 Has header "execinfo.h" : YES 00:03:34.233 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:34.233 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:34.233 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:34.233 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:34.233 Run-time dependency openssl found: YES 3.1.1 00:03:34.233 Run-time dependency libpcap found: YES 1.10.4 00:03:34.233 Has header "pcap.h" with dependency libpcap: YES 00:03:34.233 Compiler for C supports arguments -Wcast-qual: YES 00:03:34.233 Compiler for C supports arguments -Wdeprecated: YES 00:03:34.233 Compiler for C supports arguments -Wformat: YES 00:03:34.233 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:34.233 Compiler for C supports arguments -Wformat-security: NO 00:03:34.233 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:34.233 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:34.233 Compiler for C supports arguments -Wnested-externs: YES 00:03:34.233 Compiler for C supports arguments -Wold-style-definition: YES 00:03:34.233 Compiler for C supports arguments -Wpointer-arith: YES 00:03:34.233 Compiler for C supports arguments -Wsign-compare: YES 00:03:34.233 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:34.233 Compiler for C supports arguments -Wundef: YES 00:03:34.233 Compiler for C supports arguments -Wwrite-strings: YES 00:03:34.233 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:34.233 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:34.233 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:34.233 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:34.233 Program objdump found: YES (/usr/bin/objdump) 00:03:34.233 Compiler for C supports arguments -mavx512f: YES 00:03:34.233 Checking if "AVX512 checking" compiles: YES 00:03:34.233 Fetching value of define "__SSE4_2__" : 1 00:03:34.234 Fetching value of define "__AES__" : 1 00:03:34.234 Fetching value of define "__AVX__" : 1 00:03:34.234 Fetching value of define "__AVX2__" : 1 00:03:34.234 Fetching value of define "__AVX512BW__" : 1 00:03:34.234 Fetching value of define "__AVX512CD__" : 1 
00:03:34.234 Fetching value of define "__AVX512DQ__" : 1 00:03:34.234 Fetching value of define "__AVX512F__" : 1 00:03:34.234 Fetching value of define "__AVX512VL__" : 1 00:03:34.234 Fetching value of define "__PCLMUL__" : 1 00:03:34.234 Fetching value of define "__RDRND__" : 1 00:03:34.234 Fetching value of define "__RDSEED__" : 1 00:03:34.234 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:34.234 Fetching value of define "__znver1__" : (undefined) 00:03:34.234 Fetching value of define "__znver2__" : (undefined) 00:03:34.234 Fetching value of define "__znver3__" : (undefined) 00:03:34.234 Fetching value of define "__znver4__" : (undefined) 00:03:34.234 Library asan found: YES 00:03:34.234 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:34.234 Message: lib/log: Defining dependency "log" 00:03:34.234 Message: lib/kvargs: Defining dependency "kvargs" 00:03:34.234 Message: lib/telemetry: Defining dependency "telemetry" 00:03:34.234 Library rt found: YES 00:03:34.234 Checking for function "getentropy" : NO 00:03:34.234 Message: lib/eal: Defining dependency "eal" 00:03:34.234 Message: lib/ring: Defining dependency "ring" 00:03:34.234 Message: lib/rcu: Defining dependency "rcu" 00:03:34.234 Message: lib/mempool: Defining dependency "mempool" 00:03:34.234 Message: lib/mbuf: Defining dependency "mbuf" 00:03:34.234 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:34.234 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:34.234 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:34.234 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:34.234 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:34.234 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:34.234 Compiler for C supports arguments -mpclmul: YES 00:03:34.234 Compiler for C supports arguments -maes: YES 00:03:34.234 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:34.234 Compiler for C supports arguments -mavx512bw: YES 00:03:34.234 Compiler for C supports arguments -mavx512dq: YES 00:03:34.234 Compiler for C supports arguments -mavx512vl: YES 00:03:34.234 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:34.234 Compiler for C supports arguments -mavx2: YES 00:03:34.234 Compiler for C supports arguments -mavx: YES 00:03:34.234 Message: lib/net: Defining dependency "net" 00:03:34.234 Message: lib/meter: Defining dependency "meter" 00:03:34.234 Message: lib/ethdev: Defining dependency "ethdev" 00:03:34.234 Message: lib/pci: Defining dependency "pci" 00:03:34.234 Message: lib/cmdline: Defining dependency "cmdline" 00:03:34.234 Message: lib/hash: Defining dependency "hash" 00:03:34.234 Message: lib/timer: Defining dependency "timer" 00:03:34.234 Message: lib/compressdev: Defining dependency "compressdev" 00:03:34.234 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:34.234 Message: lib/dmadev: Defining dependency "dmadev" 00:03:34.234 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:34.234 Message: lib/power: Defining dependency "power" 00:03:34.234 Message: lib/reorder: Defining dependency "reorder" 00:03:34.234 Message: lib/security: Defining dependency "security" 00:03:34.234 Has header "linux/userfaultfd.h" : YES 00:03:34.234 Has header "linux/vduse.h" : YES 00:03:34.234 Message: lib/vhost: Defining dependency "vhost" 00:03:34.234 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:34.234 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:34.234 Message: drivers/bus/vdev: Defining dependency 
"bus_vdev" 00:03:34.234 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:34.234 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:34.234 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:34.234 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:34.234 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:34.234 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:34.234 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:34.234 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:34.234 Configuring doxy-api-html.conf using configuration 00:03:34.234 Configuring doxy-api-man.conf using configuration 00:03:34.234 Program mandb found: YES (/usr/bin/mandb) 00:03:34.234 Program sphinx-build found: NO 00:03:34.234 Configuring rte_build_config.h using configuration 00:03:34.234 Message: 00:03:34.234 ================= 00:03:34.234 Applications Enabled 00:03:34.234 ================= 00:03:34.234 00:03:34.234 apps: 00:03:34.234 00:03:34.234 00:03:34.234 Message: 00:03:34.234 ================= 00:03:34.234 Libraries Enabled 00:03:34.234 ================= 00:03:34.234 00:03:34.234 libs: 00:03:34.234 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:34.234 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:34.234 cryptodev, dmadev, power, reorder, security, vhost, 00:03:34.234 00:03:34.234 Message: 00:03:34.234 =============== 00:03:34.234 Drivers Enabled 00:03:34.234 =============== 00:03:34.234 00:03:34.234 common: 00:03:34.234 00:03:34.234 bus: 00:03:34.234 pci, vdev, 00:03:34.234 mempool: 00:03:34.234 ring, 00:03:34.234 dma: 00:03:34.234 00:03:34.234 net: 00:03:34.234 00:03:34.234 crypto: 00:03:34.234 00:03:34.234 compress: 00:03:34.234 00:03:34.234 vdpa: 00:03:34.234 00:03:34.234 00:03:34.234 Message: 00:03:34.234 ================= 00:03:34.234 Content Skipped 00:03:34.234 ================= 00:03:34.234 00:03:34.234 apps: 00:03:34.234 dumpcap: explicitly disabled via build config 00:03:34.234 graph: explicitly disabled via build config 00:03:34.234 pdump: explicitly disabled via build config 00:03:34.234 proc-info: explicitly disabled via build config 00:03:34.234 test-acl: explicitly disabled via build config 00:03:34.234 test-bbdev: explicitly disabled via build config 00:03:34.234 test-cmdline: explicitly disabled via build config 00:03:34.234 test-compress-perf: explicitly disabled via build config 00:03:34.234 test-crypto-perf: explicitly disabled via build config 00:03:34.234 test-dma-perf: explicitly disabled via build config 00:03:34.234 test-eventdev: explicitly disabled via build config 00:03:34.234 test-fib: explicitly disabled via build config 00:03:34.234 test-flow-perf: explicitly disabled via build config 00:03:34.234 test-gpudev: explicitly disabled via build config 00:03:34.234 test-mldev: explicitly disabled via build config 00:03:34.234 test-pipeline: explicitly disabled via build config 00:03:34.234 test-pmd: explicitly disabled via build config 00:03:34.234 test-regex: explicitly disabled via build config 00:03:34.234 test-sad: explicitly disabled via build config 00:03:34.234 test-security-perf: explicitly disabled via build config 00:03:34.234 00:03:34.234 libs: 00:03:34.234 argparse: explicitly disabled via build config 00:03:34.234 metrics: explicitly disabled via build config 00:03:34.234 acl: explicitly disabled via build config 00:03:34.234 bbdev: explicitly 
disabled via build config 00:03:34.234 bitratestats: explicitly disabled via build config 00:03:34.234 bpf: explicitly disabled via build config 00:03:34.234 cfgfile: explicitly disabled via build config 00:03:34.234 distributor: explicitly disabled via build config 00:03:34.234 efd: explicitly disabled via build config 00:03:34.234 eventdev: explicitly disabled via build config 00:03:34.234 dispatcher: explicitly disabled via build config 00:03:34.234 gpudev: explicitly disabled via build config 00:03:34.234 gro: explicitly disabled via build config 00:03:34.234 gso: explicitly disabled via build config 00:03:34.234 ip_frag: explicitly disabled via build config 00:03:34.234 jobstats: explicitly disabled via build config 00:03:34.234 latencystats: explicitly disabled via build config 00:03:34.234 lpm: explicitly disabled via build config 00:03:34.234 member: explicitly disabled via build config 00:03:34.234 pcapng: explicitly disabled via build config 00:03:34.234 rawdev: explicitly disabled via build config 00:03:34.234 regexdev: explicitly disabled via build config 00:03:34.234 mldev: explicitly disabled via build config 00:03:34.235 rib: explicitly disabled via build config 00:03:34.235 sched: explicitly disabled via build config 00:03:34.235 stack: explicitly disabled via build config 00:03:34.235 ipsec: explicitly disabled via build config 00:03:34.235 pdcp: explicitly disabled via build config 00:03:34.235 fib: explicitly disabled via build config 00:03:34.235 port: explicitly disabled via build config 00:03:34.235 pdump: explicitly disabled via build config 00:03:34.235 table: explicitly disabled via build config 00:03:34.235 pipeline: explicitly disabled via build config 00:03:34.235 graph: explicitly disabled via build config 00:03:34.235 node: explicitly disabled via build config 00:03:34.235 00:03:34.235 drivers: 00:03:34.235 common/cpt: not in enabled drivers build config 00:03:34.235 common/dpaax: not in enabled drivers build config 00:03:34.235 common/iavf: not in enabled drivers build config 00:03:34.235 common/idpf: not in enabled drivers build config 00:03:34.235 common/ionic: not in enabled drivers build config 00:03:34.235 common/mvep: not in enabled drivers build config 00:03:34.235 common/octeontx: not in enabled drivers build config 00:03:34.235 bus/auxiliary: not in enabled drivers build config 00:03:34.235 bus/cdx: not in enabled drivers build config 00:03:34.235 bus/dpaa: not in enabled drivers build config 00:03:34.235 bus/fslmc: not in enabled drivers build config 00:03:34.235 bus/ifpga: not in enabled drivers build config 00:03:34.235 bus/platform: not in enabled drivers build config 00:03:34.235 bus/uacce: not in enabled drivers build config 00:03:34.235 bus/vmbus: not in enabled drivers build config 00:03:34.235 common/cnxk: not in enabled drivers build config 00:03:34.235 common/mlx5: not in enabled drivers build config 00:03:34.235 common/nfp: not in enabled drivers build config 00:03:34.235 common/nitrox: not in enabled drivers build config 00:03:34.235 common/qat: not in enabled drivers build config 00:03:34.235 common/sfc_efx: not in enabled drivers build config 00:03:34.235 mempool/bucket: not in enabled drivers build config 00:03:34.235 mempool/cnxk: not in enabled drivers build config 00:03:34.235 mempool/dpaa: not in enabled drivers build config 00:03:34.235 mempool/dpaa2: not in enabled drivers build config 00:03:34.235 mempool/octeontx: not in enabled drivers build config 00:03:34.235 mempool/stack: not in enabled drivers build config 00:03:34.235 
dma/cnxk: not in enabled drivers build config 00:03:34.235 dma/dpaa: not in enabled drivers build config 00:03:34.235 dma/dpaa2: not in enabled drivers build config 00:03:34.235 dma/hisilicon: not in enabled drivers build config 00:03:34.235 dma/idxd: not in enabled drivers build config 00:03:34.235 dma/ioat: not in enabled drivers build config 00:03:34.235 dma/skeleton: not in enabled drivers build config 00:03:34.235 net/af_packet: not in enabled drivers build config 00:03:34.235 net/af_xdp: not in enabled drivers build config 00:03:34.235 net/ark: not in enabled drivers build config 00:03:34.235 net/atlantic: not in enabled drivers build config 00:03:34.235 net/avp: not in enabled drivers build config 00:03:34.235 net/axgbe: not in enabled drivers build config 00:03:34.235 net/bnx2x: not in enabled drivers build config 00:03:34.235 net/bnxt: not in enabled drivers build config 00:03:34.235 net/bonding: not in enabled drivers build config 00:03:34.235 net/cnxk: not in enabled drivers build config 00:03:34.235 net/cpfl: not in enabled drivers build config 00:03:34.235 net/cxgbe: not in enabled drivers build config 00:03:34.235 net/dpaa: not in enabled drivers build config 00:03:34.235 net/dpaa2: not in enabled drivers build config 00:03:34.235 net/e1000: not in enabled drivers build config 00:03:34.235 net/ena: not in enabled drivers build config 00:03:34.235 net/enetc: not in enabled drivers build config 00:03:34.235 net/enetfec: not in enabled drivers build config 00:03:34.235 net/enic: not in enabled drivers build config 00:03:34.235 net/failsafe: not in enabled drivers build config 00:03:34.235 net/fm10k: not in enabled drivers build config 00:03:34.235 net/gve: not in enabled drivers build config 00:03:34.235 net/hinic: not in enabled drivers build config 00:03:34.235 net/hns3: not in enabled drivers build config 00:03:34.235 net/i40e: not in enabled drivers build config 00:03:34.235 net/iavf: not in enabled drivers build config 00:03:34.235 net/ice: not in enabled drivers build config 00:03:34.235 net/idpf: not in enabled drivers build config 00:03:34.235 net/igc: not in enabled drivers build config 00:03:34.235 net/ionic: not in enabled drivers build config 00:03:34.235 net/ipn3ke: not in enabled drivers build config 00:03:34.235 net/ixgbe: not in enabled drivers build config 00:03:34.235 net/mana: not in enabled drivers build config 00:03:34.235 net/memif: not in enabled drivers build config 00:03:34.235 net/mlx4: not in enabled drivers build config 00:03:34.235 net/mlx5: not in enabled drivers build config 00:03:34.235 net/mvneta: not in enabled drivers build config 00:03:34.235 net/mvpp2: not in enabled drivers build config 00:03:34.235 net/netvsc: not in enabled drivers build config 00:03:34.235 net/nfb: not in enabled drivers build config 00:03:34.235 net/nfp: not in enabled drivers build config 00:03:34.235 net/ngbe: not in enabled drivers build config 00:03:34.235 net/null: not in enabled drivers build config 00:03:34.235 net/octeontx: not in enabled drivers build config 00:03:34.235 net/octeon_ep: not in enabled drivers build config 00:03:34.235 net/pcap: not in enabled drivers build config 00:03:34.235 net/pfe: not in enabled drivers build config 00:03:34.235 net/qede: not in enabled drivers build config 00:03:34.235 net/ring: not in enabled drivers build config 00:03:34.235 net/sfc: not in enabled drivers build config 00:03:34.235 net/softnic: not in enabled drivers build config 00:03:34.235 net/tap: not in enabled drivers build config 00:03:34.235 net/thunderx: not in 
enabled drivers build config 00:03:34.235 net/txgbe: not in enabled drivers build config 00:03:34.235 net/vdev_netvsc: not in enabled drivers build config 00:03:34.235 net/vhost: not in enabled drivers build config 00:03:34.235 net/virtio: not in enabled drivers build config 00:03:34.235 net/vmxnet3: not in enabled drivers build config 00:03:34.235 raw/*: missing internal dependency, "rawdev" 00:03:34.235 crypto/armv8: not in enabled drivers build config 00:03:34.235 crypto/bcmfs: not in enabled drivers build config 00:03:34.235 crypto/caam_jr: not in enabled drivers build config 00:03:34.235 crypto/ccp: not in enabled drivers build config 00:03:34.235 crypto/cnxk: not in enabled drivers build config 00:03:34.235 crypto/dpaa_sec: not in enabled drivers build config 00:03:34.235 crypto/dpaa2_sec: not in enabled drivers build config 00:03:34.235 crypto/ipsec_mb: not in enabled drivers build config 00:03:34.235 crypto/mlx5: not in enabled drivers build config 00:03:34.235 crypto/mvsam: not in enabled drivers build config 00:03:34.235 crypto/nitrox: not in enabled drivers build config 00:03:34.235 crypto/null: not in enabled drivers build config 00:03:34.235 crypto/octeontx: not in enabled drivers build config 00:03:34.235 crypto/openssl: not in enabled drivers build config 00:03:34.235 crypto/scheduler: not in enabled drivers build config 00:03:34.235 crypto/uadk: not in enabled drivers build config 00:03:34.235 crypto/virtio: not in enabled drivers build config 00:03:34.235 compress/isal: not in enabled drivers build config 00:03:34.235 compress/mlx5: not in enabled drivers build config 00:03:34.235 compress/nitrox: not in enabled drivers build config 00:03:34.235 compress/octeontx: not in enabled drivers build config 00:03:34.235 compress/zlib: not in enabled drivers build config 00:03:34.235 regex/*: missing internal dependency, "regexdev" 00:03:34.235 ml/*: missing internal dependency, "mldev" 00:03:34.235 vdpa/ifc: not in enabled drivers build config 00:03:34.235 vdpa/mlx5: not in enabled drivers build config 00:03:34.235 vdpa/nfp: not in enabled drivers build config 00:03:34.235 vdpa/sfc: not in enabled drivers build config 00:03:34.235 event/*: missing internal dependency, "eventdev" 00:03:34.235 baseband/*: missing internal dependency, "bbdev" 00:03:34.235 gpu/*: missing internal dependency, "gpudev" 00:03:34.235 00:03:34.235 00:03:34.235 Build targets in project: 84 00:03:34.235 00:03:34.235 DPDK 24.03.0 00:03:34.235 00:03:34.235 User defined options 00:03:34.235 buildtype : debug 00:03:34.235 default_library : shared 00:03:34.235 libdir : lib 00:03:34.235 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:34.235 b_sanitize : address 00:03:34.235 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:34.235 c_link_args : 00:03:34.235 cpu_instruction_set: native 00:03:34.235 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:34.235 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:34.235 enable_docs : false 00:03:34.235 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:34.236 enable_kmods : false 00:03:34.236 
max_lcores : 128 00:03:34.236 tests : false 00:03:34.236 00:03:34.236 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:34.503 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:34.811 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:34.811 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:34.811 [3/267] Linking static target lib/librte_kvargs.a 00:03:34.811 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:34.811 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:34.811 [6/267] Linking static target lib/librte_log.a 00:03:35.069 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:35.069 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:35.069 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:35.069 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.069 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:35.069 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:35.069 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:35.069 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:35.069 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:35.069 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:35.327 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:35.327 [18/267] Linking static target lib/librte_telemetry.a 00:03:35.585 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:35.585 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:35.585 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:35.585 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.585 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:35.585 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:35.585 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:35.585 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:35.585 [27/267] Linking target lib/librte_log.so.24.1 00:03:35.585 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:35.843 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:35.843 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:35.843 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:35.843 [32/267] Linking target lib/librte_kvargs.so.24.1 00:03:36.102 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:36.102 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.102 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:36.102 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:36.102 [37/267] Linking target lib/librte_telemetry.so.24.1 00:03:36.102 [38/267] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:36.102 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:36.102 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:36.102 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:36.102 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:36.102 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:36.102 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:36.361 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:36.361 [46/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:36.361 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:36.361 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:36.620 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:36.620 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:36.620 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:36.620 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:36.620 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:36.878 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:36.878 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:36.878 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:36.878 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:36.878 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:36.878 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:36.878 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:36.878 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:37.136 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:37.136 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:37.136 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:37.136 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:37.136 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:37.136 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:37.395 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:37.395 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:37.395 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:37.395 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:37.395 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:37.395 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:37.395 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:37.395 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:37.395 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:37.653 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:37.653 [78/267] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:37.653 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:37.950 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:37.950 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:37.950 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:37.950 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:37.950 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:38.208 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:38.208 [86/267] Linking static target lib/librte_ring.a 00:03:38.208 [87/267] Linking static target lib/librte_eal.a 00:03:38.208 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:38.208 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:38.208 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:38.208 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:38.208 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:38.467 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:38.467 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:38.467 [95/267] Linking static target lib/librte_mempool.a 00:03:38.467 [96/267] Linking static target lib/librte_rcu.a 00:03:38.467 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:38.467 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:38.467 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.725 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:38.725 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:38.725 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:38.725 [103/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:38.725 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.725 [105/267] Linking static target lib/librte_meter.a 00:03:38.725 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:38.725 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:38.983 [108/267] Linking static target lib/librte_net.a 00:03:38.983 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:38.983 [110/267] Linking static target lib/librte_mbuf.a 00:03:38.983 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:39.241 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.241 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:39.241 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:39.241 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.241 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:39.499 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:39.499 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.499 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
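The numbered [N/267] steps above are ninja building the DPDK submodule with the configuration summarized in the "User defined options" block earlier: a debug build of shared libraries, ASan enabled via b_sanitize, and everything except the pci/vdev buses and the ring mempool driver disabled. A minimal sketch of the equivalent manual invocation, reconstructed from the logged options (in CI this is driven by SPDK's build scripts, not typed by hand; the build-tmp directory and -j 10 job count match the ninja backend command meson reports later in this log):

    # Reconstructed from the "User defined options" block above -- a sketch,
    # not the exact command the CI scripts run.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Dcpu_instruction_set=native \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmax_lcores=128
    ninja -C build-tmp -j 10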
00:03:39.757 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:39.758 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:39.758 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.758 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:40.015 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:40.015 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:40.015 [126/267] Linking static target lib/librte_pci.a 00:03:40.015 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:40.015 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:40.015 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:40.015 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:40.015 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:40.015 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:40.274 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:40.274 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:40.274 [135/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.274 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:40.274 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:40.274 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:40.274 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:40.274 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:40.274 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:40.274 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:40.274 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:40.532 [144/267] Linking static target lib/librte_cmdline.a 00:03:40.532 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:40.532 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:40.532 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:40.532 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:40.789 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:40.789 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:40.789 [151/267] Linking static target lib/librte_timer.a 00:03:40.789 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:41.047 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:41.047 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:41.047 [155/267] Linking static target lib/librte_compressdev.a 00:03:41.047 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:41.047 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:41.047 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:41.047 [159/267] Linking static target lib/librte_ethdev.a 00:03:41.305 [160/267] Generating 
lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.305 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:41.305 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:41.563 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:41.563 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:41.563 [165/267] Linking static target lib/librte_dmadev.a 00:03:41.563 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:41.563 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:41.563 [168/267] Linking static target lib/librte_hash.a 00:03:41.821 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:41.821 [170/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.821 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:41.821 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.821 [173/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:41.821 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:42.079 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:42.079 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:42.079 [177/267] Linking static target lib/librte_cryptodev.a 00:03:42.079 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:42.079 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:42.337 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:42.337 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:42.337 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:42.337 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.337 [184/267] Linking static target lib/librte_power.a 00:03:42.595 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:42.595 [186/267] Linking static target lib/librte_reorder.a 00:03:42.595 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:42.595 [188/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.852 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:42.852 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:42.852 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:42.852 [192/267] Linking static target lib/librte_security.a 00:03:43.110 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.368 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.368 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:43.368 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:43.368 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.368 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:43.626 [199/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:43.626 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:43.884 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:43.884 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:43.884 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:43.884 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:44.141 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:44.141 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:44.141 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:44.141 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:44.141 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:44.141 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.422 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:44.422 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:44.422 [213/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:44.422 [214/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:44.422 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:44.422 [216/267] Linking static target drivers/librte_bus_vdev.a 00:03:44.422 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:44.422 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:44.422 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:44.422 [220/267] Linking static target drivers/librte_bus_pci.a 00:03:44.422 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:44.679 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:44.679 [223/267] Linking static target drivers/librte_mempool_ring.a 00:03:44.679 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:44.679 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.937 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.502 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:46.435 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.435 [229/267] Linking target lib/librte_eal.so.24.1 00:03:46.435 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:46.435 [231/267] Linking target lib/librte_timer.so.24.1 00:03:46.435 [232/267] Linking target lib/librte_pci.so.24.1 00:03:46.435 [233/267] Linking target lib/librte_ring.so.24.1 00:03:46.435 [234/267] Linking target lib/librte_dmadev.so.24.1 00:03:46.435 [235/267] Linking target lib/librte_meter.so.24.1 00:03:46.435 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:46.694 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:46.694 [238/267] Generating symbol file 
lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:46.694 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:46.694 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:46.694 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:46.694 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:46.694 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:46.694 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:46.694 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:46.694 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:46.694 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:46.694 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:46.952 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:46.952 [250/267] Linking target lib/librte_compressdev.so.24.1 00:03:46.952 [251/267] Linking target lib/librte_net.so.24.1 00:03:46.952 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:03:46.952 [253/267] Linking target lib/librte_reorder.so.24.1 00:03:46.952 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:46.952 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:46.952 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:47.299 [257/267] Linking target lib/librte_hash.so.24.1 00:03:47.299 [258/267] Linking target lib/librte_security.so.24.1 00:03:47.299 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.299 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:47.299 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:47.299 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:47.556 [263/267] Linking target lib/librte_power.so.24.1 00:03:48.123 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:48.123 [265/267] Linking static target lib/librte_vhost.a 00:03:49.498 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.498 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:49.498 INFO: autodetecting backend as ninja 00:03:49.498 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:07.573 CC lib/log/log_flags.o 00:04:07.573 CC lib/log/log.o 00:04:07.573 CC lib/log/log_deprecated.o 00:04:07.573 CC lib/ut_mock/mock.o 00:04:07.573 CC lib/ut/ut.o 00:04:07.573 LIB libspdk_log.a 00:04:07.573 SO libspdk_log.so.7.1 00:04:07.573 LIB libspdk_ut_mock.a 00:04:07.573 LIB libspdk_ut.a 00:04:07.573 SO libspdk_ut_mock.so.6.0 00:04:07.573 SO libspdk_ut.so.2.0 00:04:07.573 SYMLINK libspdk_log.so 00:04:07.573 SYMLINK libspdk_ut_mock.so 00:04:07.573 SYMLINK libspdk_ut.so 00:04:07.573 CC lib/dma/dma.o 00:04:07.573 CC lib/ioat/ioat.o 00:04:07.573 CC lib/util/base64.o 00:04:07.573 CC lib/util/crc16.o 00:04:07.573 CC lib/util/bit_array.o 00:04:07.573 CC lib/util/cpuset.o 00:04:07.573 CC lib/util/crc32c.o 00:04:07.573 CC lib/util/crc32.o 00:04:07.573 CXX lib/trace_parser/trace.o 00:04:07.573 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.573 CC lib/util/crc32_ieee.o 00:04:07.573 CC lib/util/crc64.o 00:04:07.573 CC lib/util/dif.o 
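From this point the log switches from the DPDK submodule to SPDK's own make output: CC lines are individual compiles, LIB marks a static archive being created, SO a versioned shared object, and SYMLINK the unversioned development link (so lib/log ends up as libspdk_log.a, libspdk_log.so.7.1, and libspdk_log.so). A rough sketch of reproducing this stage by hand, assuming a stock SPDK checkout: --enable-asan is inferred from the b_sanitize=address setting above and --with-xnvme from the bdev_xnvme objects later in this log, and the CI wrapper scripts set additional options beyond these.

    # Sketch only; flags inferred from this log rather than taken from the CI scripts.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-xnvme
    make -j10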
00:04:07.573 LIB libspdk_dma.a 00:04:07.573 CC lib/util/fd.o 00:04:07.573 CC lib/util/fd_group.o 00:04:07.573 SO libspdk_dma.so.5.0 00:04:07.573 CC lib/util/file.o 00:04:07.573 SYMLINK libspdk_dma.so 00:04:07.573 CC lib/util/hexlify.o 00:04:07.573 CC lib/util/iov.o 00:04:07.573 CC lib/vfio_user/host/vfio_user.o 00:04:07.573 CC lib/util/math.o 00:04:07.573 LIB libspdk_ioat.a 00:04:07.831 CC lib/util/net.o 00:04:07.831 SO libspdk_ioat.so.7.0 00:04:07.831 CC lib/util/pipe.o 00:04:07.831 CC lib/util/strerror_tls.o 00:04:07.831 SYMLINK libspdk_ioat.so 00:04:07.831 CC lib/util/string.o 00:04:07.831 CC lib/util/uuid.o 00:04:07.831 CC lib/util/xor.o 00:04:07.831 CC lib/util/zipf.o 00:04:07.831 CC lib/util/md5.o 00:04:07.831 LIB libspdk_vfio_user.a 00:04:07.831 SO libspdk_vfio_user.so.5.0 00:04:07.831 SYMLINK libspdk_vfio_user.so 00:04:08.090 LIB libspdk_util.a 00:04:08.090 LIB libspdk_trace_parser.a 00:04:08.090 SO libspdk_util.so.10.0 00:04:08.348 SO libspdk_trace_parser.so.6.0 00:04:08.348 SYMLINK libspdk_trace_parser.so 00:04:08.348 SYMLINK libspdk_util.so 00:04:08.348 CC lib/json/json_parse.o 00:04:08.348 CC lib/rdma_provider/common.o 00:04:08.348 CC lib/json/json_write.o 00:04:08.348 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:08.606 CC lib/json/json_util.o 00:04:08.606 CC lib/vmd/vmd.o 00:04:08.606 CC lib/conf/conf.o 00:04:08.606 CC lib/idxd/idxd.o 00:04:08.606 CC lib/env_dpdk/env.o 00:04:08.606 CC lib/rdma_utils/rdma_utils.o 00:04:08.606 CC lib/env_dpdk/memory.o 00:04:08.606 LIB libspdk_conf.a 00:04:08.606 CC lib/env_dpdk/pci.o 00:04:08.606 LIB libspdk_rdma_provider.a 00:04:08.606 SO libspdk_conf.so.6.0 00:04:08.606 SO libspdk_rdma_provider.so.6.0 00:04:08.606 CC lib/env_dpdk/init.o 00:04:08.606 LIB libspdk_rdma_utils.a 00:04:08.606 SYMLINK libspdk_conf.so 00:04:08.606 SO libspdk_rdma_utils.so.1.0 00:04:08.606 CC lib/vmd/led.o 00:04:08.864 SYMLINK libspdk_rdma_provider.so 00:04:08.864 CC lib/idxd/idxd_user.o 00:04:08.864 LIB libspdk_json.a 00:04:08.864 SYMLINK libspdk_rdma_utils.so 00:04:08.864 CC lib/idxd/idxd_kernel.o 00:04:08.864 SO libspdk_json.so.6.0 00:04:08.864 CC lib/env_dpdk/threads.o 00:04:08.864 SYMLINK libspdk_json.so 00:04:08.864 CC lib/env_dpdk/pci_ioat.o 00:04:08.864 CC lib/env_dpdk/pci_virtio.o 00:04:08.864 CC lib/env_dpdk/pci_vmd.o 00:04:08.864 CC lib/env_dpdk/pci_idxd.o 00:04:08.864 CC lib/env_dpdk/pci_event.o 00:04:09.121 CC lib/env_dpdk/sigbus_handler.o 00:04:09.121 CC lib/env_dpdk/pci_dpdk.o 00:04:09.121 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:09.121 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:09.121 LIB libspdk_idxd.a 00:04:09.121 SO libspdk_idxd.so.12.1 00:04:09.379 CC lib/jsonrpc/jsonrpc_server.o 00:04:09.379 CC lib/jsonrpc/jsonrpc_client.o 00:04:09.379 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:09.379 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:09.379 LIB libspdk_vmd.a 00:04:09.379 SYMLINK libspdk_idxd.so 00:04:09.379 SO libspdk_vmd.so.6.0 00:04:09.379 SYMLINK libspdk_vmd.so 00:04:09.637 LIB libspdk_jsonrpc.a 00:04:09.637 SO libspdk_jsonrpc.so.6.0 00:04:09.637 SYMLINK libspdk_jsonrpc.so 00:04:09.895 LIB libspdk_env_dpdk.a 00:04:09.895 SO libspdk_env_dpdk.so.15.0 00:04:09.895 CC lib/rpc/rpc.o 00:04:09.895 SYMLINK libspdk_env_dpdk.so 00:04:10.153 LIB libspdk_rpc.a 00:04:10.153 SO libspdk_rpc.so.6.0 00:04:10.153 SYMLINK libspdk_rpc.so 00:04:10.410 CC lib/notify/notify.o 00:04:10.410 CC lib/notify/notify_rpc.o 00:04:10.410 CC lib/trace/trace_flags.o 00:04:10.410 CC lib/trace/trace.o 00:04:10.410 CC lib/trace/trace_rpc.o 00:04:10.410 CC lib/keyring/keyring.o 00:04:10.410 CC 
lib/keyring/keyring_rpc.o 00:04:10.410 LIB libspdk_notify.a 00:04:10.410 SO libspdk_notify.so.6.0 00:04:10.668 SYMLINK libspdk_notify.so 00:04:10.668 LIB libspdk_keyring.a 00:04:10.668 LIB libspdk_trace.a 00:04:10.668 SO libspdk_keyring.so.2.0 00:04:10.668 SO libspdk_trace.so.11.0 00:04:10.668 SYMLINK libspdk_keyring.so 00:04:10.668 SYMLINK libspdk_trace.so 00:04:10.926 CC lib/sock/sock.o 00:04:10.926 CC lib/sock/sock_rpc.o 00:04:10.926 CC lib/thread/thread.o 00:04:10.926 CC lib/thread/iobuf.o 00:04:11.183 LIB libspdk_sock.a 00:04:11.183 SO libspdk_sock.so.10.0 00:04:11.441 SYMLINK libspdk_sock.so 00:04:11.699 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:11.699 CC lib/nvme/nvme_ctrlr.o 00:04:11.699 CC lib/nvme/nvme_pcie_common.o 00:04:11.699 CC lib/nvme/nvme_fabric.o 00:04:11.699 CC lib/nvme/nvme_ns_cmd.o 00:04:11.699 CC lib/nvme/nvme_pcie.o 00:04:11.699 CC lib/nvme/nvme_qpair.o 00:04:11.699 CC lib/nvme/nvme_ns.o 00:04:11.699 CC lib/nvme/nvme.o 00:04:12.264 CC lib/nvme/nvme_quirks.o 00:04:12.264 CC lib/nvme/nvme_transport.o 00:04:12.264 CC lib/nvme/nvme_discovery.o 00:04:12.264 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:12.522 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:12.522 CC lib/nvme/nvme_tcp.o 00:04:12.522 LIB libspdk_thread.a 00:04:12.522 SO libspdk_thread.so.10.2 00:04:12.522 CC lib/nvme/nvme_opal.o 00:04:12.522 CC lib/nvme/nvme_io_msg.o 00:04:12.522 SYMLINK libspdk_thread.so 00:04:12.522 CC lib/nvme/nvme_poll_group.o 00:04:12.522 CC lib/nvme/nvme_zns.o 00:04:12.780 CC lib/nvme/nvme_stubs.o 00:04:12.780 CC lib/nvme/nvme_auth.o 00:04:12.780 CC lib/nvme/nvme_cuse.o 00:04:13.038 CC lib/nvme/nvme_rdma.o 00:04:13.296 CC lib/accel/accel.o 00:04:13.296 CC lib/blob/blobstore.o 00:04:13.296 CC lib/init/json_config.o 00:04:13.296 CC lib/virtio/virtio.o 00:04:13.296 CC lib/fsdev/fsdev.o 00:04:13.554 CC lib/init/subsystem.o 00:04:13.554 CC lib/virtio/virtio_vhost_user.o 00:04:13.811 CC lib/init/subsystem_rpc.o 00:04:13.811 CC lib/init/rpc.o 00:04:13.811 CC lib/virtio/virtio_vfio_user.o 00:04:13.811 CC lib/blob/request.o 00:04:13.811 CC lib/accel/accel_rpc.o 00:04:13.811 LIB libspdk_init.a 00:04:13.811 SO libspdk_init.so.6.0 00:04:14.069 CC lib/blob/zeroes.o 00:04:14.069 SYMLINK libspdk_init.so 00:04:14.069 CC lib/accel/accel_sw.o 00:04:14.069 CC lib/virtio/virtio_pci.o 00:04:14.069 CC lib/fsdev/fsdev_io.o 00:04:14.069 CC lib/fsdev/fsdev_rpc.o 00:04:14.069 CC lib/blob/blob_bs_dev.o 00:04:14.069 LIB libspdk_nvme.a 00:04:14.336 LIB libspdk_accel.a 00:04:14.336 CC lib/event/app.o 00:04:14.336 CC lib/event/reactor.o 00:04:14.336 CC lib/event/log_rpc.o 00:04:14.336 CC lib/event/app_rpc.o 00:04:14.336 SO libspdk_accel.so.16.0 00:04:14.336 LIB libspdk_virtio.a 00:04:14.336 SO libspdk_nvme.so.14.0 00:04:14.336 CC lib/event/scheduler_static.o 00:04:14.336 LIB libspdk_fsdev.a 00:04:14.336 SYMLINK libspdk_accel.so 00:04:14.336 SO libspdk_virtio.so.7.0 00:04:14.336 SO libspdk_fsdev.so.1.0 00:04:14.336 SYMLINK libspdk_virtio.so 00:04:14.336 SYMLINK libspdk_fsdev.so 00:04:14.607 CC lib/bdev/bdev_rpc.o 00:04:14.607 CC lib/bdev/part.o 00:04:14.607 CC lib/bdev/scsi_nvme.o 00:04:14.607 CC lib/bdev/bdev.o 00:04:14.607 CC lib/bdev/bdev_zone.o 00:04:14.607 SYMLINK libspdk_nvme.so 00:04:14.607 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:14.866 LIB libspdk_event.a 00:04:14.866 SO libspdk_event.so.14.0 00:04:14.866 SYMLINK libspdk_event.so 00:04:15.433 LIB libspdk_fuse_dispatcher.a 00:04:15.433 SO libspdk_fuse_dispatcher.so.1.0 00:04:15.433 SYMLINK libspdk_fuse_dispatcher.so 00:04:16.807 LIB libspdk_blob.a 00:04:16.807 SO 
libspdk_blob.so.11.0 00:04:16.807 SYMLINK libspdk_blob.so 00:04:17.104 CC lib/blobfs/tree.o 00:04:17.104 CC lib/blobfs/blobfs.o 00:04:17.104 CC lib/lvol/lvol.o 00:04:17.364 LIB libspdk_bdev.a 00:04:17.364 SO libspdk_bdev.so.17.0 00:04:17.622 SYMLINK libspdk_bdev.so 00:04:17.622 CC lib/nbd/nbd.o 00:04:17.622 CC lib/nbd/nbd_rpc.o 00:04:17.622 CC lib/ublk/ublk.o 00:04:17.622 CC lib/ublk/ublk_rpc.o 00:04:17.622 CC lib/scsi/lun.o 00:04:17.622 CC lib/scsi/dev.o 00:04:17.622 CC lib/nvmf/ctrlr.o 00:04:17.880 CC lib/ftl/ftl_core.o 00:04:17.880 LIB libspdk_blobfs.a 00:04:17.880 CC lib/ftl/ftl_init.o 00:04:17.880 SO libspdk_blobfs.so.10.0 00:04:17.880 CC lib/ftl/ftl_layout.o 00:04:17.880 SYMLINK libspdk_blobfs.so 00:04:17.880 CC lib/ftl/ftl_debug.o 00:04:17.880 CC lib/scsi/port.o 00:04:17.880 LIB libspdk_lvol.a 00:04:17.880 SO libspdk_lvol.so.10.0 00:04:18.138 CC lib/scsi/scsi.o 00:04:18.138 CC lib/scsi/scsi_bdev.o 00:04:18.138 SYMLINK libspdk_lvol.so 00:04:18.138 CC lib/scsi/scsi_pr.o 00:04:18.138 LIB libspdk_nbd.a 00:04:18.138 CC lib/ftl/ftl_io.o 00:04:18.138 SO libspdk_nbd.so.7.0 00:04:18.138 CC lib/ftl/ftl_sb.o 00:04:18.138 CC lib/scsi/scsi_rpc.o 00:04:18.138 CC lib/nvmf/ctrlr_discovery.o 00:04:18.138 SYMLINK libspdk_nbd.so 00:04:18.138 CC lib/nvmf/ctrlr_bdev.o 00:04:18.138 CC lib/nvmf/subsystem.o 00:04:18.397 CC lib/ftl/ftl_l2p.o 00:04:18.397 LIB libspdk_ublk.a 00:04:18.397 CC lib/ftl/ftl_l2p_flat.o 00:04:18.397 SO libspdk_ublk.so.3.0 00:04:18.397 CC lib/ftl/ftl_nv_cache.o 00:04:18.397 SYMLINK libspdk_ublk.so 00:04:18.397 CC lib/nvmf/nvmf.o 00:04:18.397 CC lib/nvmf/nvmf_rpc.o 00:04:18.397 CC lib/nvmf/transport.o 00:04:18.397 CC lib/ftl/ftl_band.o 00:04:18.397 CC lib/scsi/task.o 00:04:18.657 CC lib/nvmf/tcp.o 00:04:18.657 LIB libspdk_scsi.a 00:04:18.657 SO libspdk_scsi.so.9.0 00:04:18.657 SYMLINK libspdk_scsi.so 00:04:18.657 CC lib/ftl/ftl_band_ops.o 00:04:18.915 CC lib/ftl/ftl_writer.o 00:04:18.915 CC lib/nvmf/stubs.o 00:04:18.915 CC lib/ftl/ftl_rq.o 00:04:19.174 CC lib/nvmf/mdns_server.o 00:04:19.174 CC lib/nvmf/rdma.o 00:04:19.174 CC lib/iscsi/conn.o 00:04:19.432 CC lib/nvmf/auth.o 00:04:19.432 CC lib/ftl/ftl_reloc.o 00:04:19.432 CC lib/vhost/vhost.o 00:04:19.432 CC lib/vhost/vhost_rpc.o 00:04:19.432 CC lib/vhost/vhost_scsi.o 00:04:19.432 CC lib/vhost/vhost_blk.o 00:04:19.691 CC lib/vhost/rte_vhost_user.o 00:04:19.691 CC lib/ftl/ftl_l2p_cache.o 00:04:19.949 CC lib/iscsi/init_grp.o 00:04:19.949 CC lib/iscsi/iscsi.o 00:04:19.949 CC lib/ftl/ftl_p2l.o 00:04:20.208 CC lib/ftl/ftl_p2l_log.o 00:04:20.208 CC lib/ftl/mngt/ftl_mngt.o 00:04:20.208 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:20.208 CC lib/iscsi/param.o 00:04:20.467 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:20.467 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:20.467 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:20.467 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:20.467 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:20.467 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:20.467 CC lib/iscsi/portal_grp.o 00:04:20.467 CC lib/iscsi/tgt_node.o 00:04:20.724 CC lib/iscsi/iscsi_subsystem.o 00:04:20.724 LIB libspdk_vhost.a 00:04:20.724 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:20.724 SO libspdk_vhost.so.8.0 00:04:20.724 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:20.724 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:20.724 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:20.724 SYMLINK libspdk_vhost.so 00:04:20.724 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:20.724 CC lib/iscsi/iscsi_rpc.o 00:04:20.982 CC lib/ftl/utils/ftl_conf.o 00:04:20.982 CC lib/ftl/utils/ftl_md.o 00:04:20.982 CC lib/ftl/utils/ftl_mempool.o 
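The LIB/SO/SYMLINK triples above build each SPDK component as both a static archive and a versioned shared object, in dependency order: log and util first, then env_dpdk and the core libraries, and finally the nvmf, iscsi, ftl, and vhost targets. Assuming the default output layout of an SPDK tree (build/lib; adjust if configured differently), the produced artifacts can be inspected once the build finishes, e.g.:

    # build/lib is the usual SPDK library output directory -- an assumption here
    ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_nvmf.so*
    ldd /home/vagrant/spdk_repo/spdk/build/lib/libspdk_nvmf.so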
00:04:20.982 CC lib/ftl/utils/ftl_bitmap.o 00:04:20.982 CC lib/ftl/utils/ftl_property.o 00:04:20.982 CC lib/iscsi/task.o 00:04:20.982 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:21.241 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:21.241 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:21.241 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:21.241 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:21.241 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:21.241 LIB libspdk_iscsi.a 00:04:21.241 SO libspdk_iscsi.so.8.0 00:04:21.241 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:21.241 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:21.241 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:21.500 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:21.500 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:21.500 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:21.500 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:21.500 CC lib/ftl/base/ftl_base_dev.o 00:04:21.500 LIB libspdk_nvmf.a 00:04:21.500 SYMLINK libspdk_iscsi.so 00:04:21.500 CC lib/ftl/base/ftl_base_bdev.o 00:04:21.500 CC lib/ftl/ftl_trace.o 00:04:21.758 SO libspdk_nvmf.so.19.1 00:04:21.758 LIB libspdk_ftl.a 00:04:21.758 SYMLINK libspdk_nvmf.so 00:04:22.016 SO libspdk_ftl.so.9.0 00:04:22.582 SYMLINK libspdk_ftl.so 00:04:22.840 CC module/env_dpdk/env_dpdk_rpc.o 00:04:22.840 CC module/blob/bdev/blob_bdev.o 00:04:22.840 CC module/accel/error/accel_error.o 00:04:22.840 CC module/keyring/linux/keyring.o 00:04:22.840 CC module/sock/posix/posix.o 00:04:22.840 CC module/fsdev/aio/fsdev_aio.o 00:04:22.840 CC module/accel/dsa/accel_dsa.o 00:04:22.840 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:22.840 CC module/keyring/file/keyring.o 00:04:22.840 CC module/accel/ioat/accel_ioat.o 00:04:22.840 LIB libspdk_env_dpdk_rpc.a 00:04:22.840 SO libspdk_env_dpdk_rpc.so.6.0 00:04:22.840 CC module/keyring/linux/keyring_rpc.o 00:04:22.840 SYMLINK libspdk_env_dpdk_rpc.so 00:04:22.840 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:22.840 CC module/accel/error/accel_error_rpc.o 00:04:22.840 CC module/keyring/file/keyring_rpc.o 00:04:23.097 CC module/accel/ioat/accel_ioat_rpc.o 00:04:23.097 LIB libspdk_scheduler_dynamic.a 00:04:23.097 LIB libspdk_blob_bdev.a 00:04:23.097 LIB libspdk_keyring_linux.a 00:04:23.097 SO libspdk_scheduler_dynamic.so.4.0 00:04:23.097 SO libspdk_blob_bdev.so.11.0 00:04:23.097 SO libspdk_keyring_linux.so.1.0 00:04:23.097 CC module/accel/dsa/accel_dsa_rpc.o 00:04:23.097 LIB libspdk_accel_error.a 00:04:23.097 LIB libspdk_keyring_file.a 00:04:23.097 SYMLINK libspdk_scheduler_dynamic.so 00:04:23.097 SYMLINK libspdk_blob_bdev.so 00:04:23.097 SO libspdk_keyring_file.so.2.0 00:04:23.097 CC module/fsdev/aio/linux_aio_mgr.o 00:04:23.097 SO libspdk_accel_error.so.2.0 00:04:23.097 SYMLINK libspdk_keyring_linux.so 00:04:23.097 LIB libspdk_accel_ioat.a 00:04:23.097 SO libspdk_accel_ioat.so.6.0 00:04:23.097 SYMLINK libspdk_accel_error.so 00:04:23.097 SYMLINK libspdk_keyring_file.so 00:04:23.097 LIB libspdk_accel_dsa.a 00:04:23.097 SYMLINK libspdk_accel_ioat.so 00:04:23.097 SO libspdk_accel_dsa.so.5.0 00:04:23.355 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:23.355 CC module/scheduler/gscheduler/gscheduler.o 00:04:23.355 SYMLINK libspdk_accel_dsa.so 00:04:23.355 CC module/accel/iaa/accel_iaa.o 00:04:23.355 CC module/bdev/delay/vbdev_delay.o 00:04:23.355 LIB libspdk_scheduler_dpdk_governor.a 00:04:23.355 CC module/bdev/error/vbdev_error.o 00:04:23.355 CC module/blobfs/bdev/blobfs_bdev.o 00:04:23.355 LIB libspdk_fsdev_aio.a 00:04:23.355 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:23.355 LIB libspdk_scheduler_gscheduler.a 
00:04:23.355 SO libspdk_scheduler_gscheduler.so.4.0 00:04:23.355 SO libspdk_fsdev_aio.so.1.0 00:04:23.355 CC module/bdev/gpt/gpt.o 00:04:23.355 CC module/bdev/lvol/vbdev_lvol.o 00:04:23.355 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:23.355 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:23.612 SYMLINK libspdk_scheduler_gscheduler.so 00:04:23.612 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:23.612 SYMLINK libspdk_fsdev_aio.so 00:04:23.612 CC module/bdev/error/vbdev_error_rpc.o 00:04:23.612 CC module/accel/iaa/accel_iaa_rpc.o 00:04:23.612 LIB libspdk_sock_posix.a 00:04:23.612 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:23.612 SO libspdk_sock_posix.so.6.0 00:04:23.612 CC module/bdev/gpt/vbdev_gpt.o 00:04:23.612 SYMLINK libspdk_sock_posix.so 00:04:23.612 LIB libspdk_accel_iaa.a 00:04:23.612 LIB libspdk_blobfs_bdev.a 00:04:23.612 LIB libspdk_bdev_error.a 00:04:23.612 SO libspdk_accel_iaa.so.3.0 00:04:23.612 SO libspdk_bdev_error.so.6.0 00:04:23.612 SO libspdk_blobfs_bdev.so.6.0 00:04:23.612 LIB libspdk_bdev_delay.a 00:04:23.870 SO libspdk_bdev_delay.so.6.0 00:04:23.870 SYMLINK libspdk_accel_iaa.so 00:04:23.870 SYMLINK libspdk_bdev_error.so 00:04:23.870 SYMLINK libspdk_blobfs_bdev.so 00:04:23.870 CC module/bdev/malloc/bdev_malloc.o 00:04:23.870 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:23.870 CC module/bdev/null/bdev_null.o 00:04:23.870 SYMLINK libspdk_bdev_delay.so 00:04:23.870 CC module/bdev/nvme/bdev_nvme.o 00:04:23.870 LIB libspdk_bdev_gpt.a 00:04:23.870 SO libspdk_bdev_gpt.so.6.0 00:04:23.870 CC module/bdev/passthru/vbdev_passthru.o 00:04:23.870 CC module/bdev/raid/bdev_raid.o 00:04:23.870 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:23.870 SYMLINK libspdk_bdev_gpt.so 00:04:23.870 LIB libspdk_bdev_lvol.a 00:04:23.870 CC module/bdev/split/vbdev_split.o 00:04:24.138 SO libspdk_bdev_lvol.so.6.0 00:04:24.138 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:24.138 CC module/bdev/null/bdev_null_rpc.o 00:04:24.138 SYMLINK libspdk_bdev_lvol.so 00:04:24.138 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:24.138 CC module/bdev/split/vbdev_split_rpc.o 00:04:24.138 CC module/bdev/xnvme/bdev_xnvme.o 00:04:24.138 LIB libspdk_bdev_null.a 00:04:24.138 SO libspdk_bdev_null.so.6.0 00:04:24.138 LIB libspdk_bdev_malloc.a 00:04:24.138 SO libspdk_bdev_malloc.so.6.0 00:04:24.138 LIB libspdk_bdev_split.a 00:04:24.138 SYMLINK libspdk_bdev_null.so 00:04:24.138 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:24.395 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:24.395 SO libspdk_bdev_split.so.6.0 00:04:24.395 LIB libspdk_bdev_passthru.a 00:04:24.395 SYMLINK libspdk_bdev_malloc.so 00:04:24.395 SYMLINK libspdk_bdev_split.so 00:04:24.395 CC module/bdev/nvme/nvme_rpc.o 00:04:24.395 CC module/bdev/raid/bdev_raid_rpc.o 00:04:24.395 CC module/bdev/nvme/bdev_mdns_client.o 00:04:24.395 SO libspdk_bdev_passthru.so.6.0 00:04:24.395 LIB libspdk_bdev_xnvme.a 00:04:24.395 SYMLINK libspdk_bdev_passthru.so 00:04:24.395 CC module/bdev/nvme/vbdev_opal.o 00:04:24.395 LIB libspdk_bdev_zone_block.a 00:04:24.395 SO libspdk_bdev_xnvme.so.3.0 00:04:24.395 CC module/bdev/aio/bdev_aio.o 00:04:24.395 SO libspdk_bdev_zone_block.so.6.0 00:04:24.653 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:24.653 SYMLINK libspdk_bdev_xnvme.so 00:04:24.653 SYMLINK libspdk_bdev_zone_block.so 00:04:24.653 CC module/bdev/aio/bdev_aio_rpc.o 00:04:24.653 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:24.653 CC module/bdev/raid/bdev_raid_sb.o 00:04:24.653 CC module/bdev/raid/raid0.o 00:04:24.653 CC module/bdev/ftl/bdev_ftl.o 00:04:24.653 CC 
module/bdev/raid/raid1.o 00:04:24.911 CC module/bdev/raid/concat.o 00:04:24.911 LIB libspdk_bdev_aio.a 00:04:24.911 CC module/bdev/iscsi/bdev_iscsi.o 00:04:24.911 SO libspdk_bdev_aio.so.6.0 00:04:24.911 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:24.911 SYMLINK libspdk_bdev_aio.so 00:04:24.911 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:24.911 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:24.911 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:24.911 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:25.167 LIB libspdk_bdev_raid.a 00:04:25.167 LIB libspdk_bdev_ftl.a 00:04:25.167 SO libspdk_bdev_raid.so.6.0 00:04:25.167 SO libspdk_bdev_ftl.so.6.0 00:04:25.168 SYMLINK libspdk_bdev_ftl.so 00:04:25.168 SYMLINK libspdk_bdev_raid.so 00:04:25.168 LIB libspdk_bdev_iscsi.a 00:04:25.168 SO libspdk_bdev_iscsi.so.6.0 00:04:25.425 SYMLINK libspdk_bdev_iscsi.so 00:04:25.425 LIB libspdk_bdev_virtio.a 00:04:25.425 SO libspdk_bdev_virtio.so.6.0 00:04:25.425 SYMLINK libspdk_bdev_virtio.so 00:04:26.358 LIB libspdk_bdev_nvme.a 00:04:26.358 SO libspdk_bdev_nvme.so.7.0 00:04:26.616 SYMLINK libspdk_bdev_nvme.so 00:04:26.875 CC module/event/subsystems/keyring/keyring.o 00:04:26.875 CC module/event/subsystems/iobuf/iobuf.o 00:04:26.875 CC module/event/subsystems/vmd/vmd.o 00:04:26.875 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:26.875 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:26.875 CC module/event/subsystems/fsdev/fsdev.o 00:04:26.875 CC module/event/subsystems/scheduler/scheduler.o 00:04:26.875 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:26.875 CC module/event/subsystems/sock/sock.o 00:04:27.133 LIB libspdk_event_vhost_blk.a 00:04:27.133 LIB libspdk_event_keyring.a 00:04:27.133 SO libspdk_event_vhost_blk.so.3.0 00:04:27.133 LIB libspdk_event_fsdev.a 00:04:27.133 LIB libspdk_event_scheduler.a 00:04:27.133 SO libspdk_event_keyring.so.1.0 00:04:27.133 LIB libspdk_event_iobuf.a 00:04:27.133 SO libspdk_event_scheduler.so.4.0 00:04:27.133 LIB libspdk_event_vmd.a 00:04:27.133 SO libspdk_event_fsdev.so.1.0 00:04:27.133 LIB libspdk_event_sock.a 00:04:27.133 SYMLINK libspdk_event_vhost_blk.so 00:04:27.133 SO libspdk_event_iobuf.so.3.0 00:04:27.133 SYMLINK libspdk_event_keyring.so 00:04:27.133 SO libspdk_event_vmd.so.6.0 00:04:27.133 SO libspdk_event_sock.so.5.0 00:04:27.133 SYMLINK libspdk_event_scheduler.so 00:04:27.133 SYMLINK libspdk_event_fsdev.so 00:04:27.133 SYMLINK libspdk_event_sock.so 00:04:27.133 SYMLINK libspdk_event_iobuf.so 00:04:27.133 SYMLINK libspdk_event_vmd.so 00:04:27.390 CC module/event/subsystems/accel/accel.o 00:04:27.648 LIB libspdk_event_accel.a 00:04:27.648 SO libspdk_event_accel.so.6.0 00:04:27.648 SYMLINK libspdk_event_accel.so 00:04:27.936 CC module/event/subsystems/bdev/bdev.o 00:04:27.936 LIB libspdk_event_bdev.a 00:04:27.936 SO libspdk_event_bdev.so.6.0 00:04:27.936 SYMLINK libspdk_event_bdev.so 00:04:28.194 CC module/event/subsystems/ublk/ublk.o 00:04:28.194 CC module/event/subsystems/nbd/nbd.o 00:04:28.194 CC module/event/subsystems/scsi/scsi.o 00:04:28.194 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.194 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.194 LIB libspdk_event_ublk.a 00:04:28.452 LIB libspdk_event_nbd.a 00:04:28.452 SO libspdk_event_ublk.so.3.0 00:04:28.452 SO libspdk_event_nbd.so.6.0 00:04:28.452 LIB libspdk_event_scsi.a 00:04:28.452 SYMLINK libspdk_event_ublk.so 00:04:28.452 SO libspdk_event_scsi.so.6.0 00:04:28.452 SYMLINK libspdk_event_nbd.so 00:04:28.452 SYMLINK libspdk_event_scsi.so 00:04:28.452 LIB libspdk_event_nvmf.a 00:04:28.452 SO 
libspdk_event_nvmf.so.6.0 00:04:28.452 SYMLINK libspdk_event_nvmf.so 00:04:28.710 CC module/event/subsystems/iscsi/iscsi.o 00:04:28.710 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:28.710 LIB libspdk_event_iscsi.a 00:04:28.710 LIB libspdk_event_vhost_scsi.a 00:04:28.710 SO libspdk_event_vhost_scsi.so.3.0 00:04:28.710 SO libspdk_event_iscsi.so.6.0 00:04:28.710 SYMLINK libspdk_event_vhost_scsi.so 00:04:28.710 SYMLINK libspdk_event_iscsi.so 00:04:28.967 SO libspdk.so.6.0 00:04:28.967 SYMLINK libspdk.so 00:04:28.967 CC test/rpc_client/rpc_client_test.o 00:04:29.225 TEST_HEADER include/spdk/accel.h 00:04:29.225 TEST_HEADER include/spdk/accel_module.h 00:04:29.225 TEST_HEADER include/spdk/assert.h 00:04:29.225 CXX app/trace/trace.o 00:04:29.225 TEST_HEADER include/spdk/barrier.h 00:04:29.225 TEST_HEADER include/spdk/base64.h 00:04:29.225 TEST_HEADER include/spdk/bdev.h 00:04:29.225 TEST_HEADER include/spdk/bdev_module.h 00:04:29.225 TEST_HEADER include/spdk/bdev_zone.h 00:04:29.225 TEST_HEADER include/spdk/bit_array.h 00:04:29.225 TEST_HEADER include/spdk/bit_pool.h 00:04:29.225 TEST_HEADER include/spdk/blob_bdev.h 00:04:29.225 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:29.225 TEST_HEADER include/spdk/blobfs.h 00:04:29.225 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:29.225 TEST_HEADER include/spdk/blob.h 00:04:29.225 TEST_HEADER include/spdk/conf.h 00:04:29.225 TEST_HEADER include/spdk/config.h 00:04:29.225 TEST_HEADER include/spdk/cpuset.h 00:04:29.225 TEST_HEADER include/spdk/crc16.h 00:04:29.225 TEST_HEADER include/spdk/crc32.h 00:04:29.226 TEST_HEADER include/spdk/crc64.h 00:04:29.226 TEST_HEADER include/spdk/dif.h 00:04:29.226 TEST_HEADER include/spdk/dma.h 00:04:29.226 TEST_HEADER include/spdk/endian.h 00:04:29.226 TEST_HEADER include/spdk/env_dpdk.h 00:04:29.226 TEST_HEADER include/spdk/env.h 00:04:29.226 TEST_HEADER include/spdk/event.h 00:04:29.226 TEST_HEADER include/spdk/fd_group.h 00:04:29.226 TEST_HEADER include/spdk/fd.h 00:04:29.226 TEST_HEADER include/spdk/file.h 00:04:29.226 TEST_HEADER include/spdk/fsdev.h 00:04:29.226 TEST_HEADER include/spdk/fsdev_module.h 00:04:29.226 TEST_HEADER include/spdk/ftl.h 00:04:29.226 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:29.226 TEST_HEADER include/spdk/gpt_spec.h 00:04:29.226 TEST_HEADER include/spdk/hexlify.h 00:04:29.226 TEST_HEADER include/spdk/histogram_data.h 00:04:29.226 TEST_HEADER include/spdk/idxd.h 00:04:29.226 TEST_HEADER include/spdk/idxd_spec.h 00:04:29.226 TEST_HEADER include/spdk/init.h 00:04:29.226 TEST_HEADER include/spdk/ioat.h 00:04:29.226 CC test/thread/poller_perf/poller_perf.o 00:04:29.226 TEST_HEADER include/spdk/ioat_spec.h 00:04:29.226 CC examples/util/zipf/zipf.o 00:04:29.226 TEST_HEADER include/spdk/iscsi_spec.h 00:04:29.226 CC examples/ioat/perf/perf.o 00:04:29.226 TEST_HEADER include/spdk/json.h 00:04:29.226 TEST_HEADER include/spdk/jsonrpc.h 00:04:29.226 TEST_HEADER include/spdk/keyring.h 00:04:29.226 TEST_HEADER include/spdk/keyring_module.h 00:04:29.226 TEST_HEADER include/spdk/likely.h 00:04:29.226 TEST_HEADER include/spdk/log.h 00:04:29.226 TEST_HEADER include/spdk/lvol.h 00:04:29.226 TEST_HEADER include/spdk/md5.h 00:04:29.226 TEST_HEADER include/spdk/memory.h 00:04:29.226 TEST_HEADER include/spdk/mmio.h 00:04:29.226 CC test/app/bdev_svc/bdev_svc.o 00:04:29.226 TEST_HEADER include/spdk/nbd.h 00:04:29.226 TEST_HEADER include/spdk/net.h 00:04:29.226 TEST_HEADER include/spdk/notify.h 00:04:29.226 TEST_HEADER include/spdk/nvme.h 00:04:29.226 TEST_HEADER include/spdk/nvme_intel.h 
00:04:29.226 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:29.226 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:29.226 TEST_HEADER include/spdk/nvme_spec.h 00:04:29.226 TEST_HEADER include/spdk/nvme_zns.h 00:04:29.226 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:29.226 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:29.226 TEST_HEADER include/spdk/nvmf.h 00:04:29.226 TEST_HEADER include/spdk/nvmf_spec.h 00:04:29.226 TEST_HEADER include/spdk/nvmf_transport.h 00:04:29.226 TEST_HEADER include/spdk/opal.h 00:04:29.226 TEST_HEADER include/spdk/opal_spec.h 00:04:29.226 TEST_HEADER include/spdk/pci_ids.h 00:04:29.226 CC test/dma/test_dma/test_dma.o 00:04:29.226 TEST_HEADER include/spdk/pipe.h 00:04:29.226 TEST_HEADER include/spdk/queue.h 00:04:29.226 TEST_HEADER include/spdk/reduce.h 00:04:29.226 TEST_HEADER include/spdk/rpc.h 00:04:29.226 TEST_HEADER include/spdk/scheduler.h 00:04:29.226 TEST_HEADER include/spdk/scsi.h 00:04:29.226 TEST_HEADER include/spdk/scsi_spec.h 00:04:29.226 TEST_HEADER include/spdk/sock.h 00:04:29.226 TEST_HEADER include/spdk/stdinc.h 00:04:29.226 TEST_HEADER include/spdk/string.h 00:04:29.226 TEST_HEADER include/spdk/thread.h 00:04:29.226 CC test/env/mem_callbacks/mem_callbacks.o 00:04:29.226 TEST_HEADER include/spdk/trace.h 00:04:29.226 TEST_HEADER include/spdk/trace_parser.h 00:04:29.226 TEST_HEADER include/spdk/tree.h 00:04:29.226 TEST_HEADER include/spdk/ublk.h 00:04:29.226 TEST_HEADER include/spdk/util.h 00:04:29.226 TEST_HEADER include/spdk/uuid.h 00:04:29.226 TEST_HEADER include/spdk/version.h 00:04:29.226 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:29.226 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:29.226 TEST_HEADER include/spdk/vhost.h 00:04:29.226 TEST_HEADER include/spdk/vmd.h 00:04:29.226 TEST_HEADER include/spdk/xor.h 00:04:29.226 TEST_HEADER include/spdk/zipf.h 00:04:29.226 CXX test/cpp_headers/accel.o 00:04:29.226 LINK poller_perf 00:04:29.226 LINK rpc_client_test 00:04:29.226 LINK interrupt_tgt 00:04:29.226 LINK zipf 00:04:29.226 LINK bdev_svc 00:04:29.226 LINK ioat_perf 00:04:29.484 LINK spdk_trace 00:04:29.484 CXX test/cpp_headers/accel_module.o 00:04:29.484 CXX test/cpp_headers/assert.o 00:04:29.484 CC test/app/histogram_perf/histogram_perf.o 00:04:29.484 CC test/app/jsoncat/jsoncat.o 00:04:29.484 CXX test/cpp_headers/barrier.o 00:04:29.484 CC examples/ioat/verify/verify.o 00:04:29.484 CC app/trace_record/trace_record.o 00:04:29.484 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:29.484 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:29.484 LINK jsoncat 00:04:29.742 LINK histogram_perf 00:04:29.742 CXX test/cpp_headers/base64.o 00:04:29.742 LINK test_dma 00:04:29.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:29.742 CXX test/cpp_headers/bdev.o 00:04:29.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:29.742 LINK mem_callbacks 00:04:29.742 LINK verify 00:04:29.742 LINK spdk_trace_record 00:04:29.742 CXX test/cpp_headers/bdev_module.o 00:04:30.000 LINK nvme_fuzz 00:04:30.000 CC test/env/vtophys/vtophys.o 00:04:30.000 CC examples/thread/thread/thread_ex.o 00:04:30.000 CC examples/sock/hello_world/hello_sock.o 00:04:30.000 CXX test/cpp_headers/bdev_zone.o 00:04:30.000 CC test/event/event_perf/event_perf.o 00:04:30.000 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.000 CC app/nvmf_tgt/nvmf_main.o 00:04:30.000 LINK event_perf 00:04:30.258 LINK vtophys 00:04:30.258 CC examples/vmd/led/led.o 00:04:30.258 CXX test/cpp_headers/bit_array.o 00:04:30.258 LINK lsvmd 00:04:30.258 LINK nvmf_tgt 00:04:30.258 LINK vhost_fuzz 00:04:30.258 LINK thread 00:04:30.258 LINK 
hello_sock 00:04:30.258 LINK led 00:04:30.258 CXX test/cpp_headers/bit_pool.o 00:04:30.258 CXX test/cpp_headers/blob_bdev.o 00:04:30.258 CC test/event/reactor/reactor.o 00:04:30.258 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:30.258 CC test/env/memory/memory_ut.o 00:04:30.516 CC test/env/pci/pci_ut.o 00:04:30.516 CC app/iscsi_tgt/iscsi_tgt.o 00:04:30.516 LINK reactor 00:04:30.516 CXX test/cpp_headers/blobfs_bdev.o 00:04:30.516 CC test/app/stub/stub.o 00:04:30.516 LINK env_dpdk_post_init 00:04:30.516 CC examples/idxd/perf/perf.o 00:04:30.516 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:30.516 LINK iscsi_tgt 00:04:30.516 CC test/event/reactor_perf/reactor_perf.o 00:04:30.774 CXX test/cpp_headers/blobfs.o 00:04:30.774 LINK stub 00:04:30.774 LINK reactor_perf 00:04:30.774 CXX test/cpp_headers/blob.o 00:04:30.774 CXX test/cpp_headers/conf.o 00:04:30.774 LINK hello_fsdev 00:04:30.774 CC app/spdk_tgt/spdk_tgt.o 00:04:30.774 LINK idxd_perf 00:04:30.774 LINK pci_ut 00:04:30.774 CC test/event/app_repeat/app_repeat.o 00:04:31.032 CC app/spdk_lspci/spdk_lspci.o 00:04:31.032 CXX test/cpp_headers/config.o 00:04:31.032 CXX test/cpp_headers/cpuset.o 00:04:31.032 CC app/spdk_nvme_perf/perf.o 00:04:31.032 LINK spdk_tgt 00:04:31.032 CC app/spdk_nvme_identify/identify.o 00:04:31.032 LINK app_repeat 00:04:31.032 LINK iscsi_fuzz 00:04:31.032 LINK spdk_lspci 00:04:31.032 CXX test/cpp_headers/crc16.o 00:04:31.032 CXX test/cpp_headers/crc32.o 00:04:31.032 CC examples/accel/perf/accel_perf.o 00:04:31.289 CC app/spdk_nvme_discover/discovery_aer.o 00:04:31.289 CXX test/cpp_headers/crc64.o 00:04:31.289 CXX test/cpp_headers/dif.o 00:04:31.289 CC app/spdk_top/spdk_top.o 00:04:31.289 CC test/event/scheduler/scheduler.o 00:04:31.289 CC app/vhost/vhost.o 00:04:31.289 CXX test/cpp_headers/dma.o 00:04:31.289 LINK spdk_nvme_discover 00:04:31.547 CC app/spdk_dd/spdk_dd.o 00:04:31.547 LINK memory_ut 00:04:31.547 LINK scheduler 00:04:31.547 LINK vhost 00:04:31.547 CXX test/cpp_headers/endian.o 00:04:31.547 LINK accel_perf 00:04:31.806 CXX test/cpp_headers/env_dpdk.o 00:04:31.806 CXX test/cpp_headers/env.o 00:04:31.806 CC app/fio/nvme/fio_plugin.o 00:04:31.806 LINK spdk_dd 00:04:31.806 CC app/fio/bdev/fio_plugin.o 00:04:31.806 LINK spdk_nvme_identify 00:04:31.806 CC examples/blob/hello_world/hello_blob.o 00:04:31.806 CXX test/cpp_headers/event.o 00:04:31.806 CXX test/cpp_headers/fd_group.o 00:04:31.806 CC examples/blob/cli/blobcli.o 00:04:31.806 LINK spdk_nvme_perf 00:04:32.064 CXX test/cpp_headers/fd.o 00:04:32.064 CC test/nvme/aer/aer.o 00:04:32.064 LINK hello_blob 00:04:32.064 CC test/nvme/reset/reset.o 00:04:32.064 CC test/nvme/sgl/sgl.o 00:04:32.064 LINK spdk_top 00:04:32.064 CXX test/cpp_headers/file.o 00:04:32.322 LINK spdk_nvme 00:04:32.322 CXX test/cpp_headers/fsdev.o 00:04:32.322 LINK spdk_bdev 00:04:32.322 LINK aer 00:04:32.322 CC test/accel/dif/dif.o 00:04:32.322 LINK reset 00:04:32.322 CXX test/cpp_headers/fsdev_module.o 00:04:32.322 CXX test/cpp_headers/ftl.o 00:04:32.322 LINK sgl 00:04:32.322 CXX test/cpp_headers/fuse_dispatcher.o 00:04:32.322 CC test/blobfs/mkfs/mkfs.o 00:04:32.322 LINK blobcli 00:04:32.580 CXX test/cpp_headers/gpt_spec.o 00:04:32.580 CC test/nvme/e2edp/nvme_dp.o 00:04:32.580 CXX test/cpp_headers/hexlify.o 00:04:32.580 CXX test/cpp_headers/histogram_data.o 00:04:32.580 CC test/lvol/esnap/esnap.o 00:04:32.580 CXX test/cpp_headers/idxd.o 00:04:32.580 LINK mkfs 00:04:32.580 CC examples/nvme/hello_world/hello_world.o 00:04:32.580 CXX test/cpp_headers/idxd_spec.o 00:04:32.580 CXX 
test/cpp_headers/init.o 00:04:32.838 LINK nvme_dp 00:04:32.838 CC test/nvme/err_injection/err_injection.o 00:04:32.838 CC test/nvme/overhead/overhead.o 00:04:32.838 CXX test/cpp_headers/ioat.o 00:04:32.838 LINK hello_world 00:04:32.838 CC test/nvme/startup/startup.o 00:04:32.838 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.838 CC test/nvme/reserve/reserve.o 00:04:32.838 CXX test/cpp_headers/ioat_spec.o 00:04:32.838 LINK err_injection 00:04:33.095 LINK startup 00:04:33.095 LINK overhead 00:04:33.095 CXX test/cpp_headers/iscsi_spec.o 00:04:33.095 CC examples/nvme/reconnect/reconnect.o 00:04:33.095 LINK hello_bdev 00:04:33.095 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.095 LINK dif 00:04:33.095 LINK reserve 00:04:33.095 CXX test/cpp_headers/json.o 00:04:33.095 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.095 CC examples/nvme/arbitration/arbitration.o 00:04:33.095 CC examples/nvme/hotplug/hotplug.o 00:04:33.354 CXX test/cpp_headers/jsonrpc.o 00:04:33.354 CC test/nvme/simple_copy/simple_copy.o 00:04:33.354 CC test/nvme/connect_stress/connect_stress.o 00:04:33.354 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.354 LINK reconnect 00:04:33.354 CXX test/cpp_headers/keyring.o 00:04:33.354 LINK hotplug 00:04:33.354 LINK simple_copy 00:04:33.354 LINK connect_stress 00:04:33.354 LINK cmb_copy 00:04:33.354 CXX test/cpp_headers/keyring_module.o 00:04:33.611 LINK arbitration 00:04:33.611 CC examples/nvme/abort/abort.o 00:04:33.611 CXX test/cpp_headers/likely.o 00:04:33.611 LINK nvme_manage 00:04:33.611 CXX test/cpp_headers/log.o 00:04:33.611 CC test/nvme/boot_partition/boot_partition.o 00:04:33.611 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.611 CC test/nvme/compliance/nvme_compliance.o 00:04:33.611 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:33.611 CC test/nvme/fdp/fdp.o 00:04:33.611 LINK bdevperf 00:04:33.869 CXX test/cpp_headers/lvol.o 00:04:33.869 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.869 LINK abort 00:04:33.869 LINK boot_partition 00:04:33.869 LINK doorbell_aers 00:04:33.869 LINK fused_ordering 00:04:33.869 CXX test/cpp_headers/md5.o 00:04:33.869 CC test/nvme/cuse/cuse.o 00:04:33.869 CXX test/cpp_headers/memory.o 00:04:33.869 CXX test/cpp_headers/mmio.o 00:04:33.869 LINK nvme_compliance 00:04:33.869 CXX test/cpp_headers/nbd.o 00:04:33.869 LINK pmr_persistence 00:04:33.869 CXX test/cpp_headers/net.o 00:04:34.126 CXX test/cpp_headers/notify.o 00:04:34.126 LINK fdp 00:04:34.126 CXX test/cpp_headers/nvme.o 00:04:34.126 CXX test/cpp_headers/nvme_intel.o 00:04:34.126 CXX test/cpp_headers/nvme_ocssd.o 00:04:34.126 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:34.126 CXX test/cpp_headers/nvme_spec.o 00:04:34.126 CXX test/cpp_headers/nvme_zns.o 00:04:34.126 CXX test/cpp_headers/nvmf_cmd.o 00:04:34.393 CC examples/nvmf/nvmf/nvmf.o 00:04:34.393 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:34.393 CXX test/cpp_headers/nvmf.o 00:04:34.394 CXX test/cpp_headers/nvmf_spec.o 00:04:34.394 CC test/bdev/bdevio/bdevio.o 00:04:34.394 CXX test/cpp_headers/nvmf_transport.o 00:04:34.394 CXX test/cpp_headers/opal.o 00:04:34.394 CXX test/cpp_headers/opal_spec.o 00:04:34.394 CXX test/cpp_headers/pci_ids.o 00:04:34.394 CXX test/cpp_headers/pipe.o 00:04:34.394 CXX test/cpp_headers/queue.o 00:04:34.394 CXX test/cpp_headers/reduce.o 00:04:34.670 CXX test/cpp_headers/rpc.o 00:04:34.670 CXX test/cpp_headers/scheduler.o 00:04:34.670 LINK nvmf 00:04:34.670 CXX test/cpp_headers/scsi.o 00:04:34.670 CXX test/cpp_headers/scsi_spec.o 00:04:34.670 CXX test/cpp_headers/sock.o 00:04:34.670 LINK 
bdevio 00:04:34.670 CXX test/cpp_headers/stdinc.o 00:04:34.670 CXX test/cpp_headers/string.o 00:04:34.670 CXX test/cpp_headers/thread.o 00:04:34.670 CXX test/cpp_headers/trace.o 00:04:34.670 CXX test/cpp_headers/trace_parser.o 00:04:34.670 CXX test/cpp_headers/tree.o 00:04:34.670 CXX test/cpp_headers/ublk.o 00:04:34.670 CXX test/cpp_headers/util.o 00:04:34.928 CXX test/cpp_headers/uuid.o 00:04:34.928 CXX test/cpp_headers/version.o 00:04:34.928 CXX test/cpp_headers/vfio_user_pci.o 00:04:34.928 CXX test/cpp_headers/vfio_user_spec.o 00:04:34.928 CXX test/cpp_headers/vhost.o 00:04:34.928 CXX test/cpp_headers/vmd.o 00:04:34.928 CXX test/cpp_headers/xor.o 00:04:34.928 CXX test/cpp_headers/zipf.o 00:04:34.928 LINK cuse 00:04:37.458 LINK esnap 00:04:37.716 ************************************ 00:04:37.716 END TEST make 00:04:37.716 ************************************ 00:04:37.716 00:04:37.716 real 1m13.955s 00:04:37.716 user 6m37.532s 00:04:37.716 sys 1m14.860s 00:04:37.716 10:03:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:37.716 10:03:40 make -- common/autotest_common.sh@10 -- $ set +x 00:04:37.716 10:03:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:37.716 10:03:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:37.716 10:03:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:37.716 10:03:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.716 10:03:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:37.716 10:03:40 -- pm/common@44 -- $ pid=5062 00:04:37.716 10:03:40 -- pm/common@50 -- $ kill -TERM 5062 00:04:37.716 10:03:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.716 10:03:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:37.716 10:03:40 -- pm/common@44 -- $ pid=5063 00:04:37.716 10:03:40 -- pm/common@50 -- $ kill -TERM 5063 00:04:37.716 10:03:40 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.716 10:03:40 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.716 10:03:40 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.716 10:03:40 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.716 10:03:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.716 10:03:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.716 10:03:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.716 10:03:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.716 10:03:40 -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.716 10:03:40 -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.716 10:03:40 -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.716 10:03:40 -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.716 10:03:40 -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.716 10:03:40 -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.716 10:03:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.716 10:03:40 -- scripts/common.sh@344 -- # case "$op" in 00:04:37.716 10:03:40 -- scripts/common.sh@345 -- # : 1 00:04:37.716 10:03:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.716 10:03:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.716 10:03:40 -- scripts/common.sh@365 -- # decimal 1 00:04:37.716 10:03:40 -- scripts/common.sh@353 -- # local d=1 00:04:37.716 10:03:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.716 10:03:40 -- scripts/common.sh@355 -- # echo 1 00:04:37.716 10:03:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.716 10:03:40 -- scripts/common.sh@366 -- # decimal 2 00:04:37.716 10:03:40 -- scripts/common.sh@353 -- # local d=2 00:04:37.716 10:03:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.716 10:03:40 -- scripts/common.sh@355 -- # echo 2 00:04:37.716 10:03:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.716 10:03:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.716 10:03:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.716 10:03:40 -- scripts/common.sh@368 -- # return 0 00:04:37.716 10:03:40 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.716 10:03:40 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.716 --rc genhtml_branch_coverage=1 00:04:37.717 --rc genhtml_function_coverage=1 00:04:37.717 --rc genhtml_legend=1 00:04:37.717 --rc geninfo_all_blocks=1 00:04:37.717 --rc geninfo_unexecuted_blocks=1 00:04:37.717 00:04:37.717 ' 00:04:37.717 10:03:40 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.717 --rc genhtml_branch_coverage=1 00:04:37.717 --rc genhtml_function_coverage=1 00:04:37.717 --rc genhtml_legend=1 00:04:37.717 --rc geninfo_all_blocks=1 00:04:37.717 --rc geninfo_unexecuted_blocks=1 00:04:37.717 00:04:37.717 ' 00:04:37.717 10:03:40 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.717 --rc genhtml_branch_coverage=1 00:04:37.717 --rc genhtml_function_coverage=1 00:04:37.717 --rc genhtml_legend=1 00:04:37.717 --rc geninfo_all_blocks=1 00:04:37.717 --rc geninfo_unexecuted_blocks=1 00:04:37.717 00:04:37.717 ' 00:04:37.717 10:03:40 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.717 --rc genhtml_branch_coverage=1 00:04:37.717 --rc genhtml_function_coverage=1 00:04:37.717 --rc genhtml_legend=1 00:04:37.717 --rc geninfo_all_blocks=1 00:04:37.717 --rc geninfo_unexecuted_blocks=1 00:04:37.717 00:04:37.717 ' 00:04:37.717 10:03:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.717 10:03:40 -- nvmf/common.sh@7 -- # uname -s 00:04:37.717 10:03:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.717 10:03:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.717 10:03:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.717 10:03:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.717 10:03:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.717 10:03:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.717 10:03:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.717 10:03:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.717 10:03:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.717 10:03:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.717 10:03:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f468df3-627b-414d-ac31-aa66f29c0fd5 00:04:37.717 
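The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2: cmp_versions splits both strings on ., - and : into arrays, walks the longer of the two element by element, and decides at the first component that differs (the decimal calls sanitize each component against ^[0-9]+$ first). A standalone sketch of that element-wise walk, assuming purely numeric components; version_lt is an illustrative name, not a repo helper:

version_lt() {
    local IFS='.-:'                       # same separators the trace splits on
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do       # missing components default to 0
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov < 2, use the pre-2 option set'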
10:03:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8f468df3-627b-414d-ac31-aa66f29c0fd5 00:04:37.717 10:03:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.717 10:03:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.717 10:03:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.717 10:03:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.717 10:03:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.717 10:03:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.717 10:03:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.717 10:03:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.717 10:03:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.717 10:03:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.717 10:03:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.717 10:03:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.717 10:03:40 -- paths/export.sh@5 -- # export PATH 00:04:37.717 10:03:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.717 10:03:40 -- nvmf/common.sh@51 -- # : 0 00:04:37.717 10:03:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.717 10:03:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.717 10:03:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.717 10:03:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.717 10:03:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.717 10:03:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.717 10:03:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.717 10:03:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.717 10:03:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.717 10:03:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:37.975 10:03:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:37.975 10:03:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:37.975 10:03:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:37.975 10:03:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.975 10:03:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:37.975 10:03:40 -- 
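The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': test's -eq needs integers on both sides, and the left side expanded to an empty string because the corresponding variable was never set. It is harmless here (the comparison fails and the script carries on), but the usual guards look like this; VAR is a stand-in name:

VAR=''
[ "$VAR" -eq 1 ]                 # reproduces: [: : integer expression expected
[ "${VAR:-0}" -eq 1 ]            # default the empty value to 0 first
[[ -n $VAR && $VAR -eq 1 ]]      # or short-circuit on emptiness before comparing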
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.975 10:03:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:37.975 10:03:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:37.975 10:03:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:37.975 10:03:40 -- spdk/autotest.sh@48 -- # udevadm_pid=54299 00:04:37.975 10:03:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:37.975 10:03:40 -- pm/common@17 -- # local monitor 00:04:37.975 10:03:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:37.975 10:03:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.975 10:03:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.975 10:03:40 -- pm/common@25 -- # sleep 1 00:04:37.975 10:03:40 -- pm/common@21 -- # date +%s 00:04:37.975 10:03:40 -- pm/common@21 -- # date +%s 00:04:37.975 10:03:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729159420 00:04:37.975 10:03:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729159420 00:04:37.975 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729159420_collect-vmstat.pm.log 00:04:37.975 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729159420_collect-cpu-load.pm.log 00:04:38.909 10:03:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:38.909 10:03:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:38.909 10:03:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:38.909 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.909 10:03:41 -- spdk/autotest.sh@59 -- # create_test_list 00:04:38.909 10:03:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:38.909 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.909 10:03:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:38.909 10:03:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:38.909 10:03:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:38.909 10:03:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:38.909 10:03:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:38.909 10:03:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:38.909 10:03:41 -- common/autotest_common.sh@1455 -- # uname 00:04:38.909 10:03:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:38.909 10:03:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:38.909 10:03:41 -- common/autotest_common.sh@1475 -- # uname 00:04:38.909 10:03:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:38.909 10:03:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:38.909 10:03:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:38.909 lcov: LCOV version 1.15 00:04:38.909 10:03:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:53.779 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:53.779 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:08.724 10:04:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:08.724 10:04:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.724 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.724 10:04:10 -- spdk/autotest.sh@78 -- # rm -f 00:05:08.724 10:04:10 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.724 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:08.724 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:08.724 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:08.724 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:08.724 10:04:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:08.724 10:04:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:08.724 10:04:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:08.724 10:04:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:08.724 10:04:11 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:08.724 10:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:08.724 10:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:08.724 10:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:08.724 10:04:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:08.724 10:04:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.724 10:04:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.724 10:04:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:08.724 10:04:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:08.724 10:04:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:08.724 No valid GPT data, bailing 00:05:08.724 10:04:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.724 10:04:11 -- scripts/common.sh@394 -- # pt= 00:05:08.724 10:04:11 -- scripts/common.sh@395 -- # return 1 00:05:08.724 10:04:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:08.724 1+0 records in 00:05:08.724 1+0 records out 00:05:08.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255401 s, 41.1 MB/s 00:05:08.724 10:04:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.724 10:04:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.724 10:04:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:08.724 10:04:11 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:08.724 10:04:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:08.982 No valid GPT data, bailing 00:05:08.982 10:04:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:08.982 10:04:11 -- scripts/common.sh@394 -- # pt= 00:05:08.982 10:04:11 -- scripts/common.sh@395 -- # return 1 00:05:08.982 10:04:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:08.982 1+0 records in 00:05:08.982 1+0 records out 00:05:08.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041664 s, 252 MB/s 00:05:08.982 10:04:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.982 10:04:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.982 10:04:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:08.982 10:04:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:08.982 10:04:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:08.982 No valid GPT data, bailing 00:05:08.982 10:04:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:08.982 10:04:11 -- scripts/common.sh@394 -- # pt= 00:05:08.982 10:04:11 -- scripts/common.sh@395 -- # return 1 00:05:08.982 10:04:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:08.982 1+0 
records in 00:05:08.982 1+0 records out 00:05:08.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556825 s, 188 MB/s 00:05:08.982 10:04:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.982 10:04:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.982 10:04:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:08.982 10:04:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:08.982 10:04:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:08.982 No valid GPT data, bailing 00:05:08.982 10:04:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:08.982 10:04:12 -- scripts/common.sh@394 -- # pt= 00:05:08.982 10:04:12 -- scripts/common.sh@395 -- # return 1 00:05:08.982 10:04:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:08.982 1+0 records in 00:05:08.982 1+0 records out 00:05:08.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434369 s, 241 MB/s 00:05:08.982 10:04:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.982 10:04:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.982 10:04:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:08.982 10:04:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:08.982 10:04:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:08.982 No valid GPT data, bailing 00:05:08.982 10:04:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:09.240 10:04:12 -- scripts/common.sh@394 -- # pt= 00:05:09.240 10:04:12 -- scripts/common.sh@395 -- # return 1 00:05:09.240 10:04:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:09.240 1+0 records in 00:05:09.240 1+0 records out 00:05:09.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527636 s, 199 MB/s 00:05:09.240 10:04:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.240 10:04:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.240 10:04:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:09.240 10:04:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:09.240 10:04:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:09.240 No valid GPT data, bailing 00:05:09.240 10:04:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:09.240 10:04:12 -- scripts/common.sh@394 -- # pt= 00:05:09.240 10:04:12 -- scripts/common.sh@395 -- # return 1 00:05:09.240 10:04:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:09.240 1+0 records in 00:05:09.240 1+0 records out 00:05:09.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385427 s, 272 MB/s 00:05:09.240 10:04:12 -- spdk/autotest.sh@105 -- # sync 00:05:09.498 10:04:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.498 10:04:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.498 10:04:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.395 10:04:14 -- spdk/autotest.sh@111 -- # uname -s 00:05:11.395 10:04:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:11.395 10:04:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:11.395 10:04:14 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.911 
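The six dd runs above are the pre-test scrub: for every /dev/nvme*n* namespace (zoned namespaces, collected earlier from the queue/zoned sysfs attribute, would be excluded) autotest asks scripts/spdk-gpt.py whether the disk carries a valid GPT, double-checks with blkid -s PTTYPE, and only when both probes come back empty zeroes the first MiB. A condensed sketch of that guard; wipe_if_blank is an illustrative name and the exit-code handling of spdk-gpt.py is an assumption, not the exact block_in_use control flow:

wipe_if_blank() {
    local dev=$1
    # either probe finding partition-table data means the disk is in use
    "$rootdir/scripts/spdk-gpt.py" "$dev" >/dev/null 2>&1 && return 0
    [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && return 0
    dd if=/dev/zero of="$dev" bs=1M count=1    # blank disk: scrub the first MiB
}
wipe_if_blank /dev/nvme0n1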
Hugepages 00:05:11.911 node hugesize free / total 00:05:11.911 node0 1048576kB 0 / 0 00:05:11.911 node0 2048kB 0 / 0 00:05:11.911 00:05:11.911 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:12.168 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:12.168 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:12.168 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:12.168 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:12.168 10:04:15 -- spdk/autotest.sh@117 -- # uname -s 00:05:12.168 10:04:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:12.168 10:04:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:12.168 10:04:15 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.305 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.305 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.305 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.305 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.305 10:04:16 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:14.689 10:04:17 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:14.689 10:04:17 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:14.689 10:04:17 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.689 10:04:17 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:14.689 10:04:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:14.689 10:04:17 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:14.689 10:04:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.689 10:04:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:14.689 10:04:17 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.689 10:04:17 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:14.689 10:04:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:14.689 10:04:17 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.950 Waiting for block devices as requested 00:05:14.950 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.211 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.211 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.211 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.487 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:20.487 10:04:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:20.487 10:04:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:20.487 10:04:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.487 10:04:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:20.487 10:04:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.488 10:04:23 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:20.488 10:04:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:20.488 10:04:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1541 -- # continue 00:05:20.488 10:04:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:20.488 10:04:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1541 -- # continue 00:05:20.488 10:04:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 
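get_nvme_ctrlr_from_bdf above maps a PCI address back to its character device by resolving every /sys/class/nvme/* symlink and grepping for the "<bdf>/nvme/nvme" path component; note the mapping follows enumeration order, not PCI order (0000:00:10.0 is nvme1 and 0000:00:11.0 is nvme0 on this VM). The same lookup as a small sketch; ctrlr_for_bdf is an illustrative name:

ctrlr_for_bdf() {
    local bdf=$1 path
    # a controller's sysfs node lives under its PCI device's directory
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    [[ -n $path ]] && basename "$path"
}
ctrlr_for_bdf 0000:00:10.0    # -> nvme1 in the trace above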
00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:20.488 10:04:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1541 -- # continue 00:05:20.488 10:04:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:20.488 10:04:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:20.488 10:04:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:20.488 10:04:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:20.488 10:04:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
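The oacs=' 0x12a' / oacs_ns_manage=8 pair in each iteration above is the Namespace Management check: OACS (Optional Admin Command Support) is read from nvme id-ctrl, bit 3 (mask 0x8) advertises namespace management, and 0x12a & 0x8 = 8, i.e. supported; unvmcap=' 0' then reports no unallocated capacity, which is the [[ 0 -eq 0 ]] that sends every iteration to continue. The bit test in isolation:

oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)   # ' 0x12a' above
(( oacs & 0x8 )) && echo 'namespace management supported'   # 0x12a & 0x8 = 8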
00:05:20.488 10:04:23 -- common/autotest_common.sh@1541 -- # continue 00:05:20.488 10:04:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:20.488 10:04:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.488 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 10:04:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:20.488 10:04:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.488 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 10:04:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.316 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.316 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.316 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.575 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.575 10:04:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:21.575 10:04:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.575 10:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.575 10:04:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:21.575 10:04:24 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:21.575 10:04:24 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.575 10:04:24 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:21.575 10:04:24 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:21.575 10:04:24 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:21.575 10:04:24 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:21.575 10:04:24 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:21.575 10:04:24 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:21.575 10:04:24 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:21.575 10:04:24 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.575 10:04:24 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:21.575 10:04:24 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:21.575 10:04:24 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:21.575 10:04:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:21.575 10:04:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:21.575 10:04:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.575 10:04:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:21.575 10:04:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.575 10:04:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:21.575 10:04:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
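opal_revert_cleanup above filters for one specific controller model: get_nvme_bdfs_by_id reads each function's device ID out of sysfs and keeps the BDF only when it matches 0x0a54. The QEMU emulated controllers all report 0x0010, so the filtered list stays empty and the revert is skipped. The filter in isolation (the hard-coded loop is illustrative; in the trace the BDF list comes from gen_nvme.sh):

wanted=0x0a54
matches=()
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0x0010' on this VM
    [[ $dev == "$wanted" ]] && matches+=("$bdf")
done
echo "${#matches[@]} controller(s) match 0x0a54"    # 0 here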
00:05:21.575 10:04:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:21.575 10:04:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:21.575 10:04:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.575 10:04:24 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:21.575 10:04:24 -- common/autotest_common.sh@1570 -- # return 0 00:05:21.575 10:04:24 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:21.575 10:04:24 -- common/autotest_common.sh@1578 -- # return 0 00:05:21.575 10:04:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:21.575 10:04:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:21.575 10:04:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.575 10:04:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.575 10:04:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:21.575 10:04:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.575 10:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.575 10:04:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:21.575 10:04:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.575 10:04:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.575 10:04:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.575 10:04:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.575 ************************************ 00:05:21.575 START TEST env 00:05:21.575 ************************************ 00:05:21.575 10:04:24 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.835 * Looking for test storage... 00:05:21.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.835 10:04:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.835 10:04:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.835 10:04:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.835 10:04:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.835 10:04:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.835 10:04:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.835 10:04:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.835 10:04:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.835 10:04:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.835 10:04:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.835 10:04:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.835 10:04:24 env -- scripts/common.sh@344 -- # case "$op" in 00:05:21.835 10:04:24 env -- scripts/common.sh@345 -- # : 1 00:05:21.835 10:04:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.835 10:04:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.835 10:04:24 env -- scripts/common.sh@365 -- # decimal 1 00:05:21.835 10:04:24 env -- scripts/common.sh@353 -- # local d=1 00:05:21.835 10:04:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.835 10:04:24 env -- scripts/common.sh@355 -- # echo 1 00:05:21.835 10:04:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.835 10:04:24 env -- scripts/common.sh@366 -- # decimal 2 00:05:21.835 10:04:24 env -- scripts/common.sh@353 -- # local d=2 00:05:21.835 10:04:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.835 10:04:24 env -- scripts/common.sh@355 -- # echo 2 00:05:21.835 10:04:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.835 10:04:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.835 10:04:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.835 10:04:24 env -- scripts/common.sh@368 -- # return 0 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.835 --rc genhtml_branch_coverage=1 00:05:21.835 --rc genhtml_function_coverage=1 00:05:21.835 --rc genhtml_legend=1 00:05:21.835 --rc geninfo_all_blocks=1 00:05:21.835 --rc geninfo_unexecuted_blocks=1 00:05:21.835 00:05:21.835 ' 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.835 --rc genhtml_branch_coverage=1 00:05:21.835 --rc genhtml_function_coverage=1 00:05:21.835 --rc genhtml_legend=1 00:05:21.835 --rc geninfo_all_blocks=1 00:05:21.835 --rc geninfo_unexecuted_blocks=1 00:05:21.835 00:05:21.835 ' 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.835 --rc genhtml_branch_coverage=1 00:05:21.835 --rc genhtml_function_coverage=1 00:05:21.835 --rc genhtml_legend=1 00:05:21.835 --rc geninfo_all_blocks=1 00:05:21.835 --rc geninfo_unexecuted_blocks=1 00:05:21.835 00:05:21.835 ' 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.835 --rc genhtml_branch_coverage=1 00:05:21.835 --rc genhtml_function_coverage=1 00:05:21.835 --rc genhtml_legend=1 00:05:21.835 --rc geninfo_all_blocks=1 00:05:21.835 --rc geninfo_unexecuted_blocks=1 00:05:21.835 00:05:21.835 ' 00:05:21.835 10:04:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.835 10:04:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.835 10:04:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.835 ************************************ 00:05:21.835 START TEST env_memory 00:05:21.835 ************************************ 00:05:21.835 10:04:24 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:21.835 00:05:21.835 00:05:21.835 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.835 http://cunit.sourceforge.net/ 00:05:21.835 00:05:21.835 00:05:21.835 Suite: memory 00:05:21.835 Test: alloc and free memory map ...[2024-10-17 10:04:24.837780] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:21.835 passed 00:05:21.835 Test: mem map translation ...[2024-10-17 10:04:24.876520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:21.835 [2024-10-17 10:04:24.876559] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:21.835 [2024-10-17 10:04:24.876618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:21.835 [2024-10-17 10:04:24.876634] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.097 passed 00:05:22.097 Test: mem map registration ...[2024-10-17 10:04:24.944741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:22.097 [2024-10-17 10:04:24.944790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:22.097 passed 00:05:22.097 Test: mem map adjacent registrations ...passed 00:05:22.097 00:05:22.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.097 suites 1 1 n/a 0 0 00:05:22.097 tests 4 4 4 0 0 00:05:22.097 asserts 152 152 152 0 n/a 00:05:22.097 00:05:22.097 Elapsed time = 0.233 seconds 00:05:22.097 00:05:22.097 real 0m0.267s 00:05:22.097 user 0m0.243s 00:05:22.097 sys 0m0.018s 00:05:22.097 10:04:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.097 10:04:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.097 ************************************ 00:05:22.097 END TEST env_memory 00:05:22.097 ************************************ 00:05:22.097 10:04:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.097 10:04:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.097 10:04:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.097 10:04:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.097 ************************************ 00:05:22.097 START TEST env_vtophys 00:05:22.097 ************************************ 00:05:22.097 10:04:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.097 EAL: lib.eal log level changed from notice to debug 00:05:22.097 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 1 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 2 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 3 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 4 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 5 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 6 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 7 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 8 as core 0 on socket 0 00:05:22.097 EAL: Detected lcore 9 as core 0 on socket 0 00:05:22.097 EAL: Maximum logical cores by configuration: 128 00:05:22.097 EAL: Detected CPU lcores: 10 00:05:22.097 EAL: Detected NUMA nodes: 1 00:05:22.097 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:22.097 EAL: Detected shared linkage of DPDK 00:05:22.097 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:22.097 EAL: Selected IOVA mode 'PA' 00:05:22.097 EAL: Probing VFIO support... 00:05:22.097 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.097 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:22.097 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.097 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.097 EAL: Setting up physically contiguous memory... 00:05:22.097 EAL: Setting maximum number of open files to 524288 00:05:22.097 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.097 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.097 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.097 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.097 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.097 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.097 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.097 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.097 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.098 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.098 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.098 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.098 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.098 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.098 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.098 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.098 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.098 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.098 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.098 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.098 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.098 EAL: Hugepages will be freed exactly as allocated. 00:05:22.098 EAL: No shared files mode enabled, IPC is disabled 00:05:22.098 EAL: No shared files mode enabled, IPC is disabled 00:05:22.358 EAL: TSC frequency is ~2600000 KHz 00:05:22.358 EAL: Main lcore 0 is ready (tid=7f157a4f7a40;cpuset=[0]) 00:05:22.359 EAL: Trying to obtain current memory policy. 00:05:22.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.359 EAL: Restoring previous memory policy: 0 00:05:22.359 EAL: request: mp_malloc_sync 00:05:22.359 EAL: No shared files mode enabled, IPC is disabled 00:05:22.359 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.359 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.359 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.359 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:22.359 00:05:22.359 00:05:22.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.359 http://cunit.sourceforge.net/ 00:05:22.359 00:05:22.359 00:05:22.359 Suite: components_suite 00:05:22.620 Test: vtophys_malloc_test ...passed 00:05:22.620 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:22.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.620 EAL: Restoring previous memory policy: 4 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was expanded by 4MB 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was shrunk by 4MB 00:05:22.620 EAL: Trying to obtain current memory policy. 00:05:22.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.620 EAL: Restoring previous memory policy: 4 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was expanded by 6MB 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was shrunk by 6MB 00:05:22.620 EAL: Trying to obtain current memory policy. 00:05:22.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.620 EAL: Restoring previous memory policy: 4 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was expanded by 10MB 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was shrunk by 10MB 00:05:22.620 EAL: Trying to obtain current memory policy. 00:05:22.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.620 EAL: Restoring previous memory policy: 4 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was expanded by 18MB 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was shrunk by 18MB 00:05:22.620 EAL: Trying to obtain current memory policy. 00:05:22.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.620 EAL: Restoring previous memory policy: 4 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was expanded by 34MB 00:05:22.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.620 EAL: request: mp_malloc_sync 00:05:22.620 EAL: No shared files mode enabled, IPC is disabled 00:05:22.620 EAL: Heap on socket 0 was shrunk by 34MB 00:05:22.882 EAL: Trying to obtain current memory policy. 
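The repeated "Calling mem event callback 'spdk:(nil)'" lines above are DPDK notifying SPDK's env layer each time the EAL heap is expanded or shrunk; SPDK fans those events out to every registered spdk_mem_map, which is also where the earlier "Initial mem_map notify failed" error inside spdk_mem_map_alloc originates. A minimal sketch of hooking those notifications through the public env API, under stated assumptions: error handling is trimmed, the program name and printf bodies are purely illustrative, and struct fields can differ slightly across SPDK versions.

```c
/* Minimal sketch: observe the REGISTER/UNREGISTER notifications that fire
 * when the EAL heap grows or shrinks (the mem event callback lines above).
 * Error handling is trimmed for brevity; names are illustrative. */
#include "spdk/env.h"
#include <stdio.h>

static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
          enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	printf("%s vaddr=%p len=%zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0; /* non-zero here fails the registration (cf. the error above) */
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "mem_map_demo"; /* hypothetical name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Replays all existing registrations, then tracks future ones. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

	void *buf = spdk_malloc(4 * 1024 * 1024, 0x1000, NULL,
	                        SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	spdk_free(buf);
	spdk_mem_map_free(&map);
	return 0;
}
```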
00:05:22.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.882 EAL: Restoring previous memory policy: 4 00:05:22.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.882 EAL: request: mp_malloc_sync 00:05:22.882 EAL: No shared files mode enabled, IPC is disabled 00:05:22.882 EAL: Heap on socket 0 was expanded by 66MB 00:05:22.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.882 EAL: request: mp_malloc_sync 00:05:22.882 EAL: No shared files mode enabled, IPC is disabled 00:05:22.882 EAL: Heap on socket 0 was shrunk by 66MB 00:05:22.882 EAL: Trying to obtain current memory policy. 00:05:22.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.882 EAL: Restoring previous memory policy: 4 00:05:22.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.882 EAL: request: mp_malloc_sync 00:05:22.882 EAL: No shared files mode enabled, IPC is disabled 00:05:22.882 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.141 EAL: request: mp_malloc_sync 00:05:23.141 EAL: No shared files mode enabled, IPC is disabled 00:05:23.141 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.141 EAL: Trying to obtain current memory policy. 00:05:23.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.141 EAL: Restoring previous memory policy: 4 00:05:23.141 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.141 EAL: request: mp_malloc_sync 00:05:23.141 EAL: No shared files mode enabled, IPC is disabled 00:05:23.141 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.707 EAL: request: mp_malloc_sync 00:05:23.707 EAL: No shared files mode enabled, IPC is disabled 00:05:23.707 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.707 EAL: Trying to obtain current memory policy. 00:05:23.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.965 EAL: Restoring previous memory policy: 4 00:05:23.965 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.965 EAL: request: mp_malloc_sync 00:05:23.965 EAL: No shared files mode enabled, IPC is disabled 00:05:23.965 EAL: Heap on socket 0 was expanded by 514MB 00:05:24.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.531 EAL: request: mp_malloc_sync 00:05:24.531 EAL: No shared files mode enabled, IPC is disabled 00:05:24.531 EAL: Heap on socket 0 was shrunk by 514MB 00:05:25.096 EAL: Trying to obtain current memory policy. 
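The suite's name refers to virtual-to-physical translation: buffers allocated with SPDK_MALLOC_DMA are registered with the env layer, so spdk_vtophys() can resolve them to a physical (or IOVA) address. A minimal sketch of that translation, assuming the env has already been initialized as in the previous snippet; the buffer size and alignment here are arbitrary illustrations.

```c
/* Minimal sketch of the translation the vtophys suite exercises. Assumes
 * spdk_env_init() has already run. */
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

static int
check_translation(void)
{
	void *buf = spdk_malloc(2 * 1024 * 1024, 0x1000, NULL,
	                        SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (buf == NULL) {
		return -1;
	}

	uint64_t size = 2 * 1024 * 1024;
	uint64_t paddr = spdk_vtophys(buf, &size);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_free(buf);
		return -1;
	}

	/* On return, 'size' holds how many bytes at 'buf' are contiguous. */
	printf("vaddr %p -> paddr 0x%" PRIx64 " (%" PRIu64 " contiguous bytes)\n",
	       buf, paddr, size);
	spdk_free(buf);
	return 0;
}
```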
00:05:25.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.096 EAL: Restoring previous memory policy: 4 00:05:25.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.096 EAL: request: mp_malloc_sync 00:05:25.096 EAL: No shared files mode enabled, IPC is disabled 00:05:25.096 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.471 EAL: request: mp_malloc_sync 00:05:26.471 EAL: No shared files mode enabled, IPC is disabled 00:05:26.471 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:27.067 passed 00:05:27.067 00:05:27.067 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.067 suites 1 1 n/a 0 0 00:05:27.067 tests 2 2 2 0 0 00:05:27.067 asserts 5845 5845 5845 0 n/a 00:05:27.067 00:05:27.067 Elapsed time = 4.765 seconds 00:05:27.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.067 EAL: request: mp_malloc_sync 00:05:27.067 EAL: No shared files mode enabled, IPC is disabled 00:05:27.067 EAL: Heap on socket 0 was shrunk by 2MB 00:05:27.067 EAL: No shared files mode enabled, IPC is disabled 00:05:27.067 EAL: No shared files mode enabled, IPC is disabled 00:05:27.067 EAL: No shared files mode enabled, IPC is disabled 00:05:27.067 ************************************ 00:05:27.067 END TEST env_vtophys 00:05:27.067 ************************************ 00:05:27.067 00:05:27.067 real 0m5.019s 00:05:27.067 user 0m4.240s 00:05:27.067 sys 0m0.629s 00:05:27.067 10:04:30 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.067 10:04:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 10:04:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.328 10:04:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.328 10:04:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.328 10:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 ************************************ 00:05:27.328 START TEST env_pci 00:05:27.328 ************************************ 00:05:27.328 10:04:30 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.328 00:05:27.328 00:05:27.328 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.328 http://cunit.sourceforge.net/ 00:05:27.328 00:05:27.328 00:05:27.328 Suite: pci 00:05:27.328 Test: pci_hook ...[2024-10-17 10:04:30.191561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57077 has claimed it 00:05:27.328 EAL: Cannot find device (10000:00:01.0) 00:05:27.328 EAL: Failed to attach device on primary process 00:05:27.328 passed 00:05:27.328 00:05:27.328 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.328 suites 1 1 n/a 0 0 00:05:27.328 tests 1 1 1 0 0 00:05:27.328 asserts 25 25 25 0 n/a 00:05:27.328 00:05:27.328 Elapsed time = 0.007 seconds 00:05:27.328 00:05:27.328 real 0m0.075s 00:05:27.328 user 0m0.036s 00:05:27.328 sys 0m0.037s 00:05:27.328 10:04:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.328 ************************************ 00:05:27.328 END TEST env_pci 00:05:27.328 ************************************ 00:05:27.328 10:04:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 10:04:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.328 10:04:30 env -- env/env.sh@15 -- # uname 00:05:27.328 10:04:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.328 10:04:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.328 10:04:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.328 10:04:30 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:27.328 10:04:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.328 10:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 ************************************ 00:05:27.328 START TEST env_dpdk_post_init 00:05:27.328 ************************************ 00:05:27.328 10:04:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.328 EAL: Detected CPU lcores: 10 00:05:27.328 EAL: Detected NUMA nodes: 1 00:05:27.328 EAL: Detected shared linkage of DPDK 00:05:27.328 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.328 EAL: Selected IOVA mode 'PA' 00:05:27.587 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.587 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:27.587 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:27.587 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:27.587 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:27.587 Starting DPDK initialization... 00:05:27.587 Starting SPDK post initialization... 00:05:27.587 SPDK NVMe probe 00:05:27.587 Attaching to 0000:00:10.0 00:05:27.587 Attaching to 0000:00:11.0 00:05:27.587 Attaching to 0000:00:12.0 00:05:27.587 Attaching to 0000:00:13.0 00:05:27.587 Attached to 0000:00:10.0 00:05:27.587 Attached to 0000:00:11.0 00:05:27.587 Attached to 0000:00:13.0 00:05:27.587 Attached to 0000:00:12.0 00:05:27.587 Cleaning up... 
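The "Attaching to ..." / "Attached to ..." pairs above are the probe and attach callbacks of spdk_nvme_probe() firing once per enumerated controller (four emulated NVMe devices, vendor:device 1b36:0010, in this VM). A minimal sketch of that flow; the callback bodies are illustrative, and detach/cleanup is omitted.

```c
/* Minimal sketch of the probe/attach flow behind the log lines above:
 * spdk_nvme_probe() enumerates NVMe devices and invokes the callbacks
 * for each one. Error handling and detach are trimmed. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* true = go ahead and attach to this controller */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	/* A real program keeps ctrlr and later calls spdk_nvme_detach(ctrlr). */
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_demo"; /* hypothetical name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}
```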
00:05:27.587 ************************************ 00:05:27.587 END TEST env_dpdk_post_init 00:05:27.587 ************************************ 00:05:27.587 00:05:27.587 real 0m0.236s 00:05:27.587 user 0m0.075s 00:05:27.587 sys 0m0.064s 00:05:27.587 10:04:30 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.587 10:04:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.587 10:04:30 env -- env/env.sh@26 -- # uname 00:05:27.587 10:04:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:27.587 10:04:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:27.587 10:04:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.587 10:04:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.587 10:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.587 ************************************ 00:05:27.587 START TEST env_mem_callbacks 00:05:27.587 ************************************ 00:05:27.587 10:04:30 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:27.587 EAL: Detected CPU lcores: 10 00:05:27.587 EAL: Detected NUMA nodes: 1 00:05:27.587 EAL: Detected shared linkage of DPDK 00:05:27.587 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.587 EAL: Selected IOVA mode 'PA' 00:05:27.846 00:05:27.846 00:05:27.847 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.847 http://cunit.sourceforge.net/ 00:05:27.847 00:05:27.847 00:05:27.847 Suite: memory 00:05:27.847 Test: test ... 00:05:27.847 register 0x200000200000 2097152 00:05:27.847 malloc 3145728 00:05:27.847 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.847 register 0x200000400000 4194304 00:05:27.847 buf 0x2000004fffc0 len 3145728 PASSED 00:05:27.847 malloc 64 00:05:27.847 buf 0x2000004ffec0 len 64 PASSED 00:05:27.847 malloc 4194304 00:05:27.847 register 0x200000800000 6291456 00:05:27.847 buf 0x2000009fffc0 len 4194304 PASSED 00:05:27.847 free 0x2000004fffc0 3145728 00:05:27.847 free 0x2000004ffec0 64 00:05:27.847 unregister 0x200000400000 4194304 PASSED 00:05:27.847 free 0x2000009fffc0 4194304 00:05:27.847 unregister 0x200000800000 6291456 PASSED 00:05:27.847 malloc 8388608 00:05:27.847 register 0x200000400000 10485760 00:05:27.847 buf 0x2000005fffc0 len 8388608 PASSED 00:05:27.847 free 0x2000005fffc0 8388608 00:05:27.847 unregister 0x200000400000 10485760 PASSED 00:05:27.847 passed 00:05:27.847 00:05:27.847 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.847 suites 1 1 n/a 0 0 00:05:27.847 tests 1 1 1 0 0 00:05:27.847 asserts 15 15 15 0 n/a 00:05:27.847 00:05:27.847 Elapsed time = 0.047 seconds 00:05:27.847 00:05:27.847 real 0m0.219s 00:05:27.847 user 0m0.069s 00:05:27.847 sys 0m0.047s 00:05:27.847 10:04:30 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.847 10:04:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:27.847 ************************************ 00:05:27.847 END TEST env_mem_callbacks 00:05:27.847 ************************************ 00:05:27.847 00:05:27.847 real 0m6.221s 00:05:27.847 user 0m4.838s 00:05:27.847 sys 0m0.984s 00:05:27.847 10:04:30 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.847 10:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.847 ************************************ 00:05:27.847 END TEST env 00:05:27.847 
************************************ 00:05:27.847 10:04:30 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:27.847 10:04:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.847 10:04:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.847 10:04:30 -- common/autotest_common.sh@10 -- # set +x 00:05:27.847 ************************************ 00:05:27.847 START TEST rpc 00:05:27.847 ************************************ 00:05:27.847 10:04:30 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:28.106 * Looking for test storage... 00:05:28.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.106 10:04:30 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.106 10:04:30 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.106 10:04:30 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.106 10:04:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.106 10:04:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.106 10:04:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.106 10:04:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.106 10:04:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.106 10:04:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.106 10:04:31 rpc -- scripts/common.sh@345 -- # : 1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.106 10:04:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.106 10:04:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.106 10:04:31 rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.106 10:04:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.106 10:04:31 rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.106 10:04:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.106 10:04:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.106 10:04:31 rpc -- scripts/common.sh@368 -- # return 0 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.106 --rc genhtml_branch_coverage=1 00:05:28.106 --rc genhtml_function_coverage=1 00:05:28.106 --rc genhtml_legend=1 00:05:28.106 --rc geninfo_all_blocks=1 00:05:28.106 --rc geninfo_unexecuted_blocks=1 00:05:28.106 00:05:28.106 ' 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.106 --rc genhtml_branch_coverage=1 00:05:28.106 --rc genhtml_function_coverage=1 00:05:28.106 --rc genhtml_legend=1 00:05:28.106 --rc geninfo_all_blocks=1 00:05:28.106 --rc geninfo_unexecuted_blocks=1 00:05:28.106 00:05:28.106 ' 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.106 --rc genhtml_branch_coverage=1 00:05:28.106 --rc genhtml_function_coverage=1 00:05:28.106 --rc genhtml_legend=1 00:05:28.106 --rc geninfo_all_blocks=1 00:05:28.106 --rc geninfo_unexecuted_blocks=1 00:05:28.106 00:05:28.106 ' 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.106 --rc genhtml_branch_coverage=1 00:05:28.106 --rc genhtml_function_coverage=1 00:05:28.106 --rc genhtml_legend=1 00:05:28.106 --rc geninfo_all_blocks=1 00:05:28.106 --rc geninfo_unexecuted_blocks=1 00:05:28.106 00:05:28.106 ' 00:05:28.106 10:04:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57204 00:05:28.106 10:04:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.106 10:04:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57204 00:05:28.106 10:04:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 57204 ']' 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
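waitforlisten polls until the freshly spawned spdk_tgt exposes its JSON-RPC Unix socket. Inside the target, that socket comes up as part of spdk_app_start(): the app options carry the RPC listen address, and the RPC server is accepting connections by the time the start callback runs. A minimal sketch of that wiring, assuming a recent SPDK where spdk_app_opts_init() takes the struct size; command-line parsing and shutdown handling are omitted.

```c
/* Minimal sketch of what spdk_tgt does before the harness's waitforlisten
 * succeeds: spdk_app_start() brings up the framework and the JSON-RPC
 * server on opts.rpc_addr (a Unix socket by default). */
#include "spdk/event.h"
#include <stdio.h>

static void
app_started(void *ctx)
{
	/* The RPC server is listening once this callback runs. */
	printf("target up; RPC socket ready\n");
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "mini_tgt"; /* hypothetical name */
	opts.rpc_addr = "/var/tmp/spdk.sock"; /* the socket waitforlisten polls */

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}
```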
00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.106 10:04:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.106 [2024-10-17 10:04:31.120722] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:05:28.106 [2024-10-17 10:04:31.121003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57204 ] 00:05:28.367 [2024-10-17 10:04:31.270549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.367 [2024-10-17 10:04:31.370929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:28.367 [2024-10-17 10:04:31.371120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57204' to capture a snapshot of events at runtime. 00:05:28.367 [2024-10-17 10:04:31.371189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.367 [2024-10-17 10:04:31.371223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.367 [2024-10-17 10:04:31.371242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57204 for offline analysis/debug. 00:05:28.367 [2024-10-17 10:04:31.372103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.936 10:04:31 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.936 10:04:31 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:28.936 10:04:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.936 10:04:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.936 10:04:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:28.936 10:04:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:28.936 10:04:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.936 10:04:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.936 10:04:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.936 ************************************ 00:05:28.936 START TEST rpc_integrity 00:05:28.936 ************************************ 00:05:28.936 10:04:31 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:28.936 10:04:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:28.936 10:04:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.936 10:04:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.936 10:04:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.936 10:04:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:28.936 10:04:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:28.936 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.195 { 00:05:29.195 "name": "Malloc0", 00:05:29.195 "aliases": [ 00:05:29.195 "ced4e8be-80aa-41f9-b618-dd88ba48c459" 00:05:29.195 ], 00:05:29.195 "product_name": "Malloc disk", 00:05:29.195 "block_size": 512, 00:05:29.195 "num_blocks": 16384, 00:05:29.195 "uuid": "ced4e8be-80aa-41f9-b618-dd88ba48c459", 00:05:29.195 "assigned_rate_limits": { 00:05:29.195 "rw_ios_per_sec": 0, 00:05:29.195 "rw_mbytes_per_sec": 0, 00:05:29.195 "r_mbytes_per_sec": 0, 00:05:29.195 "w_mbytes_per_sec": 0 00:05:29.195 }, 00:05:29.195 "claimed": false, 00:05:29.195 "zoned": false, 00:05:29.195 "supported_io_types": { 00:05:29.195 "read": true, 00:05:29.195 "write": true, 00:05:29.195 "unmap": true, 00:05:29.195 "flush": true, 00:05:29.195 "reset": true, 00:05:29.195 "nvme_admin": false, 00:05:29.195 "nvme_io": false, 00:05:29.195 "nvme_io_md": false, 00:05:29.195 "write_zeroes": true, 00:05:29.195 "zcopy": true, 00:05:29.195 "get_zone_info": false, 00:05:29.195 "zone_management": false, 00:05:29.195 "zone_append": false, 00:05:29.195 "compare": false, 00:05:29.195 "compare_and_write": false, 00:05:29.195 "abort": true, 00:05:29.195 "seek_hole": false, 00:05:29.195 "seek_data": false, 00:05:29.195 "copy": true, 00:05:29.195 "nvme_iov_md": false 00:05:29.195 }, 00:05:29.195 "memory_domains": [ 00:05:29.195 { 00:05:29.195 "dma_device_id": "system", 00:05:29.195 "dma_device_type": 1 00:05:29.195 }, 00:05:29.195 { 00:05:29.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.195 "dma_device_type": 2 00:05:29.195 } 00:05:29.195 ], 00:05:29.195 "driver_specific": {} 00:05:29.195 } 00:05:29.195 ]' 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 [2024-10-17 10:04:32.097700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.195 [2024-10-17 10:04:32.097891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.195 [2024-10-17 10:04:32.097930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:29.195 [2024-10-17 10:04:32.097943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.195 [2024-10-17 10:04:32.100188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.195 [2024-10-17 10:04:32.100223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.195 Passthru0 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 
10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.195 { 00:05:29.195 "name": "Malloc0", 00:05:29.195 "aliases": [ 00:05:29.195 "ced4e8be-80aa-41f9-b618-dd88ba48c459" 00:05:29.195 ], 00:05:29.195 "product_name": "Malloc disk", 00:05:29.195 "block_size": 512, 00:05:29.195 "num_blocks": 16384, 00:05:29.195 "uuid": "ced4e8be-80aa-41f9-b618-dd88ba48c459", 00:05:29.195 "assigned_rate_limits": { 00:05:29.195 "rw_ios_per_sec": 0, 00:05:29.195 "rw_mbytes_per_sec": 0, 00:05:29.195 "r_mbytes_per_sec": 0, 00:05:29.195 "w_mbytes_per_sec": 0 00:05:29.195 }, 00:05:29.195 "claimed": true, 00:05:29.195 "claim_type": "exclusive_write", 00:05:29.195 "zoned": false, 00:05:29.195 "supported_io_types": { 00:05:29.195 "read": true, 00:05:29.195 "write": true, 00:05:29.195 "unmap": true, 00:05:29.195 "flush": true, 00:05:29.195 "reset": true, 00:05:29.195 "nvme_admin": false, 00:05:29.195 "nvme_io": false, 00:05:29.195 "nvme_io_md": false, 00:05:29.195 "write_zeroes": true, 00:05:29.195 "zcopy": true, 00:05:29.195 "get_zone_info": false, 00:05:29.195 "zone_management": false, 00:05:29.195 "zone_append": false, 00:05:29.195 "compare": false, 00:05:29.195 "compare_and_write": false, 00:05:29.195 "abort": true, 00:05:29.195 "seek_hole": false, 00:05:29.195 "seek_data": false, 00:05:29.195 "copy": true, 00:05:29.195 "nvme_iov_md": false 00:05:29.195 }, 00:05:29.195 "memory_domains": [ 00:05:29.195 { 00:05:29.195 "dma_device_id": "system", 00:05:29.195 "dma_device_type": 1 00:05:29.195 }, 00:05:29.195 { 00:05:29.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.195 "dma_device_type": 2 00:05:29.195 } 00:05:29.195 ], 00:05:29.195 "driver_specific": {} 00:05:29.195 }, 00:05:29.195 { 00:05:29.195 "name": "Passthru0", 00:05:29.195 "aliases": [ 00:05:29.195 "9d9f8b12-dcc6-5e0e-940d-035e43ba6c17" 00:05:29.195 ], 00:05:29.195 "product_name": "passthru", 00:05:29.195 "block_size": 512, 00:05:29.195 "num_blocks": 16384, 00:05:29.195 "uuid": "9d9f8b12-dcc6-5e0e-940d-035e43ba6c17", 00:05:29.195 "assigned_rate_limits": { 00:05:29.195 "rw_ios_per_sec": 0, 00:05:29.195 "rw_mbytes_per_sec": 0, 00:05:29.195 "r_mbytes_per_sec": 0, 00:05:29.195 "w_mbytes_per_sec": 0 00:05:29.195 }, 00:05:29.195 "claimed": false, 00:05:29.195 "zoned": false, 00:05:29.195 "supported_io_types": { 00:05:29.195 "read": true, 00:05:29.195 "write": true, 00:05:29.195 "unmap": true, 00:05:29.195 "flush": true, 00:05:29.195 "reset": true, 00:05:29.195 "nvme_admin": false, 00:05:29.195 "nvme_io": false, 00:05:29.195 "nvme_io_md": false, 00:05:29.195 "write_zeroes": true, 00:05:29.195 "zcopy": true, 00:05:29.195 "get_zone_info": false, 00:05:29.195 "zone_management": false, 00:05:29.195 "zone_append": false, 00:05:29.195 "compare": false, 00:05:29.195 "compare_and_write": false, 00:05:29.195 "abort": true, 00:05:29.195 "seek_hole": false, 00:05:29.195 "seek_data": false, 00:05:29.195 "copy": true, 00:05:29.195 "nvme_iov_md": false 00:05:29.195 }, 00:05:29.195 "memory_domains": [ 00:05:29.195 { 00:05:29.195 "dma_device_id": "system", 00:05:29.195 "dma_device_type": 1 00:05:29.195 }, 00:05:29.195 { 00:05:29.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.195 "dma_device_type": 2 
00:05:29.195 } 00:05:29.195 ], 00:05:29.195 "driver_specific": { 00:05:29.195 "passthru": { 00:05:29.195 "name": "Passthru0", 00:05:29.195 "base_bdev_name": "Malloc0" 00:05:29.195 } 00:05:29.195 } 00:05:29.195 } 00:05:29.195 ]' 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.195 ************************************ 00:05:29.195 END TEST rpc_integrity 00:05:29.195 ************************************ 00:05:29.195 10:04:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.195 00:05:29.195 real 0m0.243s 00:05:29.195 user 0m0.124s 00:05:29.195 sys 0m0.036s 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.195 10:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.195 10:04:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:29.195 10:04:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.196 10:04:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.196 10:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.196 ************************************ 00:05:29.196 START TEST rpc_plugins 00:05:29.196 ************************************ 00:05:29.196 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:29.196 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:29.196 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.196 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.196 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.196 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:29.196 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:29.196 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.196 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.454 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.454 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:29.454 { 00:05:29.454 "name": "Malloc1", 00:05:29.454 "aliases": 
[ 00:05:29.454 "72775062-0ab6-4e30-ad19-0376673b2ddd" 00:05:29.454 ], 00:05:29.454 "product_name": "Malloc disk", 00:05:29.454 "block_size": 4096, 00:05:29.454 "num_blocks": 256, 00:05:29.454 "uuid": "72775062-0ab6-4e30-ad19-0376673b2ddd", 00:05:29.454 "assigned_rate_limits": { 00:05:29.454 "rw_ios_per_sec": 0, 00:05:29.454 "rw_mbytes_per_sec": 0, 00:05:29.454 "r_mbytes_per_sec": 0, 00:05:29.454 "w_mbytes_per_sec": 0 00:05:29.454 }, 00:05:29.454 "claimed": false, 00:05:29.454 "zoned": false, 00:05:29.454 "supported_io_types": { 00:05:29.454 "read": true, 00:05:29.454 "write": true, 00:05:29.454 "unmap": true, 00:05:29.454 "flush": true, 00:05:29.454 "reset": true, 00:05:29.454 "nvme_admin": false, 00:05:29.454 "nvme_io": false, 00:05:29.454 "nvme_io_md": false, 00:05:29.454 "write_zeroes": true, 00:05:29.454 "zcopy": true, 00:05:29.454 "get_zone_info": false, 00:05:29.454 "zone_management": false, 00:05:29.454 "zone_append": false, 00:05:29.454 "compare": false, 00:05:29.454 "compare_and_write": false, 00:05:29.454 "abort": true, 00:05:29.454 "seek_hole": false, 00:05:29.454 "seek_data": false, 00:05:29.454 "copy": true, 00:05:29.454 "nvme_iov_md": false 00:05:29.454 }, 00:05:29.454 "memory_domains": [ 00:05:29.454 { 00:05:29.455 "dma_device_id": "system", 00:05:29.455 "dma_device_type": 1 00:05:29.455 }, 00:05:29.455 { 00:05:29.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.455 "dma_device_type": 2 00:05:29.455 } 00:05:29.455 ], 00:05:29.455 "driver_specific": {} 00:05:29.455 } 00:05:29.455 ]' 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.455 ************************************ 00:05:29.455 END TEST rpc_plugins 00:05:29.455 ************************************ 00:05:29.455 10:04:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.455 00:05:29.455 real 0m0.115s 00:05:29.455 user 0m0.067s 00:05:29.455 sys 0m0.013s 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.455 10:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.455 10:04:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.455 10:04:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.455 10:04:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.455 10:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.455 ************************************ 00:05:29.455 START TEST rpc_trace_cmd_test 00:05:29.455 ************************************ 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.455 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57204", 00:05:29.455 "tpoint_group_mask": "0x8", 00:05:29.455 "iscsi_conn": { 00:05:29.455 "mask": "0x2", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "scsi": { 00:05:29.455 "mask": "0x4", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "bdev": { 00:05:29.455 "mask": "0x8", 00:05:29.455 "tpoint_mask": "0xffffffffffffffff" 00:05:29.455 }, 00:05:29.455 "nvmf_rdma": { 00:05:29.455 "mask": "0x10", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "nvmf_tcp": { 00:05:29.455 "mask": "0x20", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "ftl": { 00:05:29.455 "mask": "0x40", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "blobfs": { 00:05:29.455 "mask": "0x80", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "dsa": { 00:05:29.455 "mask": "0x200", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "thread": { 00:05:29.455 "mask": "0x400", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "nvme_pcie": { 00:05:29.455 "mask": "0x800", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "iaa": { 00:05:29.455 "mask": "0x1000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "nvme_tcp": { 00:05:29.455 "mask": "0x2000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "bdev_nvme": { 00:05:29.455 "mask": "0x4000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "sock": { 00:05:29.455 "mask": "0x8000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "blob": { 00:05:29.455 "mask": "0x10000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "bdev_raid": { 00:05:29.455 "mask": "0x20000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 }, 00:05:29.455 "scheduler": { 00:05:29.455 "mask": "0x40000", 00:05:29.455 "tpoint_mask": "0x0" 00:05:29.455 } 00:05:29.455 }' 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:29.455 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:29.713 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:29.713 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:29.713 ************************************ 00:05:29.713 END TEST rpc_trace_cmd_test 00:05:29.713 ************************************ 00:05:29.713 10:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:29.713 00:05:29.713 real 0m0.175s 
00:05:29.713 user 0m0.135s 00:05:29.713 sys 0m0.027s 00:05:29.713 10:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.713 10:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.713 10:04:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:29.713 10:04:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:29.713 10:04:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:29.713 10:04:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.713 10:04:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.713 10:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.713 ************************************ 00:05:29.713 START TEST rpc_daemon_integrity 00:05:29.713 ************************************ 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.713 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.714 { 00:05:29.714 "name": "Malloc2", 00:05:29.714 "aliases": [ 00:05:29.714 "a7863cbf-8c02-40f0-a950-5ca974a48e59" 00:05:29.714 ], 00:05:29.714 "product_name": "Malloc disk", 00:05:29.714 "block_size": 512, 00:05:29.714 "num_blocks": 16384, 00:05:29.714 "uuid": "a7863cbf-8c02-40f0-a950-5ca974a48e59", 00:05:29.714 "assigned_rate_limits": { 00:05:29.714 "rw_ios_per_sec": 0, 00:05:29.714 "rw_mbytes_per_sec": 0, 00:05:29.714 "r_mbytes_per_sec": 0, 00:05:29.714 "w_mbytes_per_sec": 0 00:05:29.714 }, 00:05:29.714 "claimed": false, 00:05:29.714 "zoned": false, 00:05:29.714 "supported_io_types": { 00:05:29.714 "read": true, 00:05:29.714 "write": true, 00:05:29.714 "unmap": true, 00:05:29.714 "flush": true, 00:05:29.714 "reset": true, 00:05:29.714 "nvme_admin": false, 00:05:29.714 "nvme_io": false, 00:05:29.714 "nvme_io_md": false, 00:05:29.714 "write_zeroes": true, 00:05:29.714 "zcopy": true, 00:05:29.714 "get_zone_info": false, 00:05:29.714 "zone_management": false, 00:05:29.714 "zone_append": false, 00:05:29.714 "compare": false, 00:05:29.714 
"compare_and_write": false, 00:05:29.714 "abort": true, 00:05:29.714 "seek_hole": false, 00:05:29.714 "seek_data": false, 00:05:29.714 "copy": true, 00:05:29.714 "nvme_iov_md": false 00:05:29.714 }, 00:05:29.714 "memory_domains": [ 00:05:29.714 { 00:05:29.714 "dma_device_id": "system", 00:05:29.714 "dma_device_type": 1 00:05:29.714 }, 00:05:29.714 { 00:05:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.714 "dma_device_type": 2 00:05:29.714 } 00:05:29.714 ], 00:05:29.714 "driver_specific": {} 00:05:29.714 } 00:05:29.714 ]' 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.714 [2024-10-17 10:04:32.737362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:29.714 [2024-10-17 10:04:32.737435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.714 [2024-10-17 10:04:32.737455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:29.714 [2024-10-17 10:04:32.737466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.714 [2024-10-17 10:04:32.739680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.714 [2024-10-17 10:04:32.739856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.714 Passthru0 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.714 { 00:05:29.714 "name": "Malloc2", 00:05:29.714 "aliases": [ 00:05:29.714 "a7863cbf-8c02-40f0-a950-5ca974a48e59" 00:05:29.714 ], 00:05:29.714 "product_name": "Malloc disk", 00:05:29.714 "block_size": 512, 00:05:29.714 "num_blocks": 16384, 00:05:29.714 "uuid": "a7863cbf-8c02-40f0-a950-5ca974a48e59", 00:05:29.714 "assigned_rate_limits": { 00:05:29.714 "rw_ios_per_sec": 0, 00:05:29.714 "rw_mbytes_per_sec": 0, 00:05:29.714 "r_mbytes_per_sec": 0, 00:05:29.714 "w_mbytes_per_sec": 0 00:05:29.714 }, 00:05:29.714 "claimed": true, 00:05:29.714 "claim_type": "exclusive_write", 00:05:29.714 "zoned": false, 00:05:29.714 "supported_io_types": { 00:05:29.714 "read": true, 00:05:29.714 "write": true, 00:05:29.714 "unmap": true, 00:05:29.714 "flush": true, 00:05:29.714 "reset": true, 00:05:29.714 "nvme_admin": false, 00:05:29.714 "nvme_io": false, 00:05:29.714 "nvme_io_md": false, 00:05:29.714 "write_zeroes": true, 00:05:29.714 "zcopy": true, 00:05:29.714 "get_zone_info": false, 00:05:29.714 "zone_management": false, 00:05:29.714 "zone_append": false, 00:05:29.714 "compare": false, 00:05:29.714 "compare_and_write": false, 00:05:29.714 "abort": true, 00:05:29.714 "seek_hole": false, 00:05:29.714 "seek_data": false, 
00:05:29.714 "copy": true, 00:05:29.714 "nvme_iov_md": false 00:05:29.714 }, 00:05:29.714 "memory_domains": [ 00:05:29.714 { 00:05:29.714 "dma_device_id": "system", 00:05:29.714 "dma_device_type": 1 00:05:29.714 }, 00:05:29.714 { 00:05:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.714 "dma_device_type": 2 00:05:29.714 } 00:05:29.714 ], 00:05:29.714 "driver_specific": {} 00:05:29.714 }, 00:05:29.714 { 00:05:29.714 "name": "Passthru0", 00:05:29.714 "aliases": [ 00:05:29.714 "9871e534-6168-5af5-acf3-6d7db218189b" 00:05:29.714 ], 00:05:29.714 "product_name": "passthru", 00:05:29.714 "block_size": 512, 00:05:29.714 "num_blocks": 16384, 00:05:29.714 "uuid": "9871e534-6168-5af5-acf3-6d7db218189b", 00:05:29.714 "assigned_rate_limits": { 00:05:29.714 "rw_ios_per_sec": 0, 00:05:29.714 "rw_mbytes_per_sec": 0, 00:05:29.714 "r_mbytes_per_sec": 0, 00:05:29.714 "w_mbytes_per_sec": 0 00:05:29.714 }, 00:05:29.714 "claimed": false, 00:05:29.714 "zoned": false, 00:05:29.714 "supported_io_types": { 00:05:29.714 "read": true, 00:05:29.714 "write": true, 00:05:29.714 "unmap": true, 00:05:29.714 "flush": true, 00:05:29.714 "reset": true, 00:05:29.714 "nvme_admin": false, 00:05:29.714 "nvme_io": false, 00:05:29.714 "nvme_io_md": false, 00:05:29.714 "write_zeroes": true, 00:05:29.714 "zcopy": true, 00:05:29.714 "get_zone_info": false, 00:05:29.714 "zone_management": false, 00:05:29.714 "zone_append": false, 00:05:29.714 "compare": false, 00:05:29.714 "compare_and_write": false, 00:05:29.714 "abort": true, 00:05:29.714 "seek_hole": false, 00:05:29.714 "seek_data": false, 00:05:29.714 "copy": true, 00:05:29.714 "nvme_iov_md": false 00:05:29.714 }, 00:05:29.714 "memory_domains": [ 00:05:29.714 { 00:05:29.714 "dma_device_id": "system", 00:05:29.714 "dma_device_type": 1 00:05:29.714 }, 00:05:29.714 { 00:05:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.714 "dma_device_type": 2 00:05:29.714 } 00:05:29.714 ], 00:05:29.714 "driver_specific": { 00:05:29.714 "passthru": { 00:05:29.714 "name": "Passthru0", 00:05:29.714 "base_bdev_name": "Malloc2" 00:05:29.714 } 00:05:29.714 } 00:05:29.714 } 00:05:29.714 ]' 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.714 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.972 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.972 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.972 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.972 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.972 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.973 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:29.973 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.973 ************************************ 00:05:29.973 END TEST rpc_daemon_integrity 00:05:29.973 ************************************ 00:05:29.973 10:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.973 00:05:29.973 real 0m0.233s 00:05:29.973 user 0m0.122s 00:05:29.973 sys 0m0.033s 00:05:29.973 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.973 10:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.973 10:04:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:29.973 10:04:32 rpc -- rpc/rpc.sh@84 -- # killprocess 57204 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 57204 ']' 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@954 -- # kill -0 57204 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@955 -- # uname 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57204 00:05:29.973 killing process with pid 57204 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57204' 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@969 -- # kill 57204 00:05:29.973 10:04:32 rpc -- common/autotest_common.sh@974 -- # wait 57204 00:05:31.346 ************************************ 00:05:31.346 END TEST rpc 00:05:31.346 ************************************ 00:05:31.346 00:05:31.346 real 0m3.390s 00:05:31.346 user 0m3.779s 00:05:31.346 sys 0m0.625s 00:05:31.346 10:04:34 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.346 10:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.346 10:04:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:31.346 10:04:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.346 10:04:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.346 10:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:31.346 ************************************ 00:05:31.346 START TEST skip_rpc 00:05:31.346 ************************************ 00:05:31.346 10:04:34 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:31.346 * Looking for test storage... 
00:05:31.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.346 10:04:34 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.346 10:04:34 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.346 10:04:34 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.604 10:04:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.604 --rc genhtml_branch_coverage=1 00:05:31.604 --rc genhtml_function_coverage=1 00:05:31.604 --rc genhtml_legend=1 00:05:31.604 --rc geninfo_all_blocks=1 00:05:31.604 --rc geninfo_unexecuted_blocks=1 00:05:31.604 00:05:31.604 ' 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.604 --rc genhtml_branch_coverage=1 00:05:31.604 --rc genhtml_function_coverage=1 00:05:31.604 --rc genhtml_legend=1 00:05:31.604 --rc geninfo_all_blocks=1 00:05:31.604 --rc geninfo_unexecuted_blocks=1 00:05:31.604 00:05:31.604 ' 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:05:31.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.604 --rc genhtml_branch_coverage=1 00:05:31.604 --rc genhtml_function_coverage=1 00:05:31.604 --rc genhtml_legend=1 00:05:31.604 --rc geninfo_all_blocks=1 00:05:31.604 --rc geninfo_unexecuted_blocks=1 00:05:31.604 00:05:31.604 ' 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.604 --rc genhtml_branch_coverage=1 00:05:31.604 --rc genhtml_function_coverage=1 00:05:31.604 --rc genhtml_legend=1 00:05:31.604 --rc geninfo_all_blocks=1 00:05:31.604 --rc geninfo_unexecuted_blocks=1 00:05:31.604 00:05:31.604 ' 00:05:31.604 10:04:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.604 10:04:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.604 10:04:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.604 10:04:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.604 ************************************ 00:05:31.604 START TEST skip_rpc 00:05:31.604 ************************************ 00:05:31.604 10:04:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:31.604 10:04:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57416 00:05:31.604 10:04:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.604 10:04:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.604 10:04:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.604 [2024-10-17 10:04:34.542007] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:05:31.604 [2024-10-17 10:04:34.542339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57416 ] 00:05:31.605 [2024-10-17 10:04:34.693386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.863 [2024-10-17 10:04:34.793670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57416 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57416 ']' 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57416 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57416 00:05:37.118 killing process with pid 57416 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57416' 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57416 00:05:37.118 10:04:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57416 00:05:37.684 00:05:37.684 real 0m6.245s 00:05:37.684 user 0m5.883s 00:05:37.684 sys 0m0.256s 00:05:37.684 10:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.684 ************************************ 00:05:37.684 END TEST skip_rpc 00:05:37.684 ************************************ 00:05:37.684 10:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:37.684 10:04:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.684 10:04:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.684 10:04:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.684 10:04:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.684 ************************************ 00:05:37.684 START TEST skip_rpc_with_json 00:05:37.684 ************************************ 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57509 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57509 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57509 ']' 00:05:37.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.684 10:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.943 [2024-10-17 10:04:40.819733] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:05:37.943 [2024-10-17 10:04:40.819836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57509 ] 00:05:37.943 [2024-10-17 10:04:40.957075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.200 [2024-10-17 10:04:41.037366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 [2024-10-17 10:04:41.710071] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.766 request: 00:05:38.766 { 00:05:38.766 "trtype": "tcp", 00:05:38.766 "method": "nvmf_get_transports", 00:05:38.766 "req_id": 1 00:05:38.766 } 00:05:38.766 Got JSON-RPC error response 00:05:38.766 response: 00:05:38.766 { 00:05:38.766 "code": -19, 00:05:38.766 "message": "No such device" 00:05:38.766 } 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 [2024-10-17 10:04:41.718154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.766 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.024 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.024 10:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.024 { 00:05:39.024 "subsystems": [ 00:05:39.024 { 00:05:39.024 "subsystem": "fsdev", 00:05:39.024 "config": [ 00:05:39.024 { 00:05:39.024 "method": "fsdev_set_opts", 00:05:39.024 "params": { 00:05:39.024 "fsdev_io_pool_size": 65535, 00:05:39.024 "fsdev_io_cache_size": 256 00:05:39.024 } 00:05:39.024 } 00:05:39.024 ] 00:05:39.024 }, 00:05:39.024 { 00:05:39.024 "subsystem": "keyring", 00:05:39.024 "config": [] 00:05:39.024 }, 00:05:39.024 { 00:05:39.024 "subsystem": "iobuf", 00:05:39.024 "config": [ 00:05:39.024 { 00:05:39.024 "method": "iobuf_set_options", 00:05:39.024 "params": { 00:05:39.024 "small_pool_count": 8192, 00:05:39.024 "large_pool_count": 1024, 00:05:39.024 "small_bufsize": 8192, 00:05:39.024 "large_bufsize": 135168 00:05:39.024 } 00:05:39.024 } 00:05:39.024 ] 00:05:39.024 }, 00:05:39.024 { 00:05:39.024 "subsystem": "sock", 00:05:39.024 "config": [ 00:05:39.024 { 00:05:39.024 "method": 
"sock_set_default_impl", 00:05:39.024 "params": { 00:05:39.024 "impl_name": "posix" 00:05:39.024 } 00:05:39.024 }, 00:05:39.024 { 00:05:39.024 "method": "sock_impl_set_options", 00:05:39.024 "params": { 00:05:39.024 "impl_name": "ssl", 00:05:39.024 "recv_buf_size": 4096, 00:05:39.024 "send_buf_size": 4096, 00:05:39.024 "enable_recv_pipe": true, 00:05:39.024 "enable_quickack": false, 00:05:39.024 "enable_placement_id": 0, 00:05:39.024 "enable_zerocopy_send_server": true, 00:05:39.024 "enable_zerocopy_send_client": false, 00:05:39.024 "zerocopy_threshold": 0, 00:05:39.024 "tls_version": 0, 00:05:39.024 "enable_ktls": false 00:05:39.024 } 00:05:39.024 }, 00:05:39.024 { 00:05:39.024 "method": "sock_impl_set_options", 00:05:39.024 "params": { 00:05:39.024 "impl_name": "posix", 00:05:39.024 "recv_buf_size": 2097152, 00:05:39.024 "send_buf_size": 2097152, 00:05:39.024 "enable_recv_pipe": true, 00:05:39.024 "enable_quickack": false, 00:05:39.024 "enable_placement_id": 0, 00:05:39.024 "enable_zerocopy_send_server": true, 00:05:39.024 "enable_zerocopy_send_client": false, 00:05:39.024 "zerocopy_threshold": 0, 00:05:39.024 "tls_version": 0, 00:05:39.025 "enable_ktls": false 00:05:39.025 } 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "vmd", 00:05:39.025 "config": [] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "accel", 00:05:39.025 "config": [ 00:05:39.025 { 00:05:39.025 "method": "accel_set_options", 00:05:39.025 "params": { 00:05:39.025 "small_cache_size": 128, 00:05:39.025 "large_cache_size": 16, 00:05:39.025 "task_count": 2048, 00:05:39.025 "sequence_count": 2048, 00:05:39.025 "buf_count": 2048 00:05:39.025 } 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "bdev", 00:05:39.025 "config": [ 00:05:39.025 { 00:05:39.025 "method": "bdev_set_options", 00:05:39.025 "params": { 00:05:39.025 "bdev_io_pool_size": 65535, 00:05:39.025 "bdev_io_cache_size": 256, 00:05:39.025 "bdev_auto_examine": true, 00:05:39.025 "iobuf_small_cache_size": 128, 00:05:39.025 "iobuf_large_cache_size": 16 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "bdev_raid_set_options", 00:05:39.025 "params": { 00:05:39.025 "process_window_size_kb": 1024, 00:05:39.025 "process_max_bandwidth_mb_sec": 0 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "bdev_iscsi_set_options", 00:05:39.025 "params": { 00:05:39.025 "timeout_sec": 30 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "bdev_nvme_set_options", 00:05:39.025 "params": { 00:05:39.025 "action_on_timeout": "none", 00:05:39.025 "timeout_us": 0, 00:05:39.025 "timeout_admin_us": 0, 00:05:39.025 "keep_alive_timeout_ms": 10000, 00:05:39.025 "arbitration_burst": 0, 00:05:39.025 "low_priority_weight": 0, 00:05:39.025 "medium_priority_weight": 0, 00:05:39.025 "high_priority_weight": 0, 00:05:39.025 "nvme_adminq_poll_period_us": 10000, 00:05:39.025 "nvme_ioq_poll_period_us": 0, 00:05:39.025 "io_queue_requests": 0, 00:05:39.025 "delay_cmd_submit": true, 00:05:39.025 "transport_retry_count": 4, 00:05:39.025 "bdev_retry_count": 3, 00:05:39.025 "transport_ack_timeout": 0, 00:05:39.025 "ctrlr_loss_timeout_sec": 0, 00:05:39.025 "reconnect_delay_sec": 0, 00:05:39.025 "fast_io_fail_timeout_sec": 0, 00:05:39.025 "disable_auto_failback": false, 00:05:39.025 "generate_uuids": false, 00:05:39.025 "transport_tos": 0, 00:05:39.025 "nvme_error_stat": false, 00:05:39.025 "rdma_srq_size": 0, 00:05:39.025 "io_path_stat": false, 00:05:39.025 
"allow_accel_sequence": false, 00:05:39.025 "rdma_max_cq_size": 0, 00:05:39.025 "rdma_cm_event_timeout_ms": 0, 00:05:39.025 "dhchap_digests": [ 00:05:39.025 "sha256", 00:05:39.025 "sha384", 00:05:39.025 "sha512" 00:05:39.025 ], 00:05:39.025 "dhchap_dhgroups": [ 00:05:39.025 "null", 00:05:39.025 "ffdhe2048", 00:05:39.025 "ffdhe3072", 00:05:39.025 "ffdhe4096", 00:05:39.025 "ffdhe6144", 00:05:39.025 "ffdhe8192" 00:05:39.025 ] 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "bdev_nvme_set_hotplug", 00:05:39.025 "params": { 00:05:39.025 "period_us": 100000, 00:05:39.025 "enable": false 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "bdev_wait_for_examine" 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "scsi", 00:05:39.025 "config": null 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "scheduler", 00:05:39.025 "config": [ 00:05:39.025 { 00:05:39.025 "method": "framework_set_scheduler", 00:05:39.025 "params": { 00:05:39.025 "name": "static" 00:05:39.025 } 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "vhost_scsi", 00:05:39.025 "config": [] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "vhost_blk", 00:05:39.025 "config": [] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "ublk", 00:05:39.025 "config": [] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "nbd", 00:05:39.025 "config": [] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "nvmf", 00:05:39.025 "config": [ 00:05:39.025 { 00:05:39.025 "method": "nvmf_set_config", 00:05:39.025 "params": { 00:05:39.025 "discovery_filter": "match_any", 00:05:39.025 "admin_cmd_passthru": { 00:05:39.025 "identify_ctrlr": false 00:05:39.025 }, 00:05:39.025 "dhchap_digests": [ 00:05:39.025 "sha256", 00:05:39.025 "sha384", 00:05:39.025 "sha512" 00:05:39.025 ], 00:05:39.025 "dhchap_dhgroups": [ 00:05:39.025 "null", 00:05:39.025 "ffdhe2048", 00:05:39.025 "ffdhe3072", 00:05:39.025 "ffdhe4096", 00:05:39.025 "ffdhe6144", 00:05:39.025 "ffdhe8192" 00:05:39.025 ] 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "nvmf_set_max_subsystems", 00:05:39.025 "params": { 00:05:39.025 "max_subsystems": 1024 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "nvmf_set_crdt", 00:05:39.025 "params": { 00:05:39.025 "crdt1": 0, 00:05:39.025 "crdt2": 0, 00:05:39.025 "crdt3": 0 00:05:39.025 } 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "method": "nvmf_create_transport", 00:05:39.025 "params": { 00:05:39.025 "trtype": "TCP", 00:05:39.025 "max_queue_depth": 128, 00:05:39.025 "max_io_qpairs_per_ctrlr": 127, 00:05:39.025 "in_capsule_data_size": 4096, 00:05:39.025 "max_io_size": 131072, 00:05:39.025 "io_unit_size": 131072, 00:05:39.025 "max_aq_depth": 128, 00:05:39.025 "num_shared_buffers": 511, 00:05:39.025 "buf_cache_size": 4294967295, 00:05:39.025 "dif_insert_or_strip": false, 00:05:39.025 "zcopy": false, 00:05:39.025 "c2h_success": true, 00:05:39.025 "sock_priority": 0, 00:05:39.025 "abort_timeout_sec": 1, 00:05:39.025 "ack_timeout": 0, 00:05:39.025 "data_wr_pool_size": 0 00:05:39.025 } 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 }, 00:05:39.025 { 00:05:39.025 "subsystem": "iscsi", 00:05:39.025 "config": [ 00:05:39.025 { 00:05:39.025 "method": "iscsi_set_options", 00:05:39.025 "params": { 00:05:39.025 "node_base": "iqn.2016-06.io.spdk", 00:05:39.025 "max_sessions": 128, 00:05:39.025 "max_connections_per_session": 2, 00:05:39.025 "max_queue_depth": 64, 00:05:39.025 "default_time2wait": 2, 
00:05:39.025 "default_time2retain": 20, 00:05:39.025 "first_burst_length": 8192, 00:05:39.025 "immediate_data": true, 00:05:39.025 "allow_duplicated_isid": false, 00:05:39.025 "error_recovery_level": 0, 00:05:39.025 "nop_timeout": 60, 00:05:39.025 "nop_in_interval": 30, 00:05:39.025 "disable_chap": false, 00:05:39.025 "require_chap": false, 00:05:39.025 "mutual_chap": false, 00:05:39.025 "chap_group": 0, 00:05:39.025 "max_large_datain_per_connection": 64, 00:05:39.025 "max_r2t_per_connection": 4, 00:05:39.025 "pdu_pool_size": 36864, 00:05:39.025 "immediate_data_pool_size": 16384, 00:05:39.025 "data_out_pool_size": 2048 00:05:39.025 } 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 } 00:05:39.025 ] 00:05:39.025 } 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57509 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57509 ']' 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57509 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57509 00:05:39.025 killing process with pid 57509 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57509' 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57509 00:05:39.025 10:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57509 00:05:40.426 10:04:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57543 00:05:40.426 10:04:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:40.426 10:04:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57543 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57543 ']' 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57543 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57543 00:05:45.691 killing process with pid 57543 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57543' 00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57543 
00:05:45.691 10:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57543 00:05:46.257 10:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.257 10:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.257 ************************************ 00:05:46.257 END TEST skip_rpc_with_json 00:05:46.257 ************************************ 00:05:46.257 00:05:46.257 real 0m8.556s 00:05:46.257 user 0m8.248s 00:05:46.257 sys 0m0.581s 00:05:46.257 10:04:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.257 10:04:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.257 10:04:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.257 10:04:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.257 10:04:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.257 10:04:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.515 ************************************ 00:05:46.515 START TEST skip_rpc_with_delay 00:05:46.515 ************************************ 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.515 [2024-10-17 10:04:49.433476] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.515 00:05:46.515 real 0m0.132s 00:05:46.515 user 0m0.066s 00:05:46.515 sys 0m0.064s 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.515 10:04:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.515 ************************************ 00:05:46.515 END TEST skip_rpc_with_delay 00:05:46.515 ************************************ 00:05:46.515 10:04:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.515 10:04:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.515 10:04:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.515 10:04:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.515 10:04:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.515 10:04:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.515 ************************************ 00:05:46.515 START TEST exit_on_failed_rpc_init 00:05:46.515 ************************************ 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57666 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57666 00:05:46.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57666 ']' 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.515 10:04:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.773 [2024-10-17 10:04:49.610888] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:05:46.773 [2024-10-17 10:04:49.611031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57666 ] 00:05:46.773 [2024-10-17 10:04:49.761325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.773 [2024-10-17 10:04:49.844981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:47.707 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.707 [2024-10-17 10:04:50.514805] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:05:47.707 [2024-10-17 10:04:50.514928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57684 ] 00:05:47.707 [2024-10-17 10:04:50.666255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.707 [2024-10-17 10:04:50.781487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.707 [2024-10-17 10:04:50.781588] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:47.707 [2024-10-17 10:04:50.781602] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.707 [2024-10-17 10:04:50.781616] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57666 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57666 ']' 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57666 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57666 00:05:47.966 killing process with pid 57666 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.966 10:04:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.966 10:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57666' 00:05:47.966 10:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57666 00:05:47.966 10:04:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57666 00:05:49.339 00:05:49.339 real 0m2.655s 00:05:49.339 user 0m2.951s 00:05:49.339 sys 0m0.437s 00:05:49.339 ************************************ 00:05:49.339 END TEST exit_on_failed_rpc_init 00:05:49.339 ************************************ 00:05:49.339 10:04:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.339 10:04:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 10:04:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.339 00:05:49.339 real 0m17.919s 00:05:49.339 user 0m17.290s 00:05:49.339 sys 0m1.518s 00:05:49.339 10:04:52 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.339 10:04:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 ************************************ 00:05:49.339 END TEST skip_rpc 00:05:49.339 ************************************ 00:05:49.339 10:04:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.339 10:04:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.339 10:04:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.339 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.339 
************************************ 00:05:49.339 START TEST rpc_client 00:05:49.339 ************************************ 00:05:49.339 10:04:52 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.339 * Looking for test storage... 00:05:49.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:49.339 10:04:52 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.339 10:04:52 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.339 10:04:52 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.339 10:04:52 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:49.339 10:04:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.340 10:04:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.340 10:04:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.340 10:04:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:49.340 10:04:52 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.340 10:04:52 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.340 --rc genhtml_branch_coverage=1 00:05:49.340 --rc genhtml_function_coverage=1 00:05:49.340 --rc genhtml_legend=1 00:05:49.340 --rc geninfo_all_blocks=1 00:05:49.340 --rc geninfo_unexecuted_blocks=1 00:05:49.340 00:05:49.340 ' 00:05:49.340 10:04:52 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.340 --rc genhtml_branch_coverage=1 00:05:49.340 --rc genhtml_function_coverage=1 00:05:49.340 --rc genhtml_legend=1 00:05:49.340 --rc geninfo_all_blocks=1 00:05:49.340 --rc geninfo_unexecuted_blocks=1 00:05:49.340 00:05:49.340 ' 00:05:49.340 10:04:52 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.340 --rc genhtml_branch_coverage=1 00:05:49.340 --rc genhtml_function_coverage=1 00:05:49.340 --rc genhtml_legend=1 00:05:49.340 --rc geninfo_all_blocks=1 00:05:49.340 --rc geninfo_unexecuted_blocks=1 00:05:49.340 00:05:49.340 ' 00:05:49.340 10:04:52 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.340 --rc genhtml_branch_coverage=1 00:05:49.340 --rc genhtml_function_coverage=1 00:05:49.340 --rc genhtml_legend=1 00:05:49.340 --rc geninfo_all_blocks=1 00:05:49.340 --rc geninfo_unexecuted_blocks=1 00:05:49.340 00:05:49.340 ' 00:05:49.340 10:04:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:49.598 OK 00:05:49.598 10:04:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.598 ************************************ 00:05:49.598 END TEST rpc_client 00:05:49.598 ************************************ 00:05:49.598 00:05:49.598 real 0m0.186s 00:05:49.598 user 0m0.100s 00:05:49.598 sys 0m0.088s 00:05:49.598 10:04:52 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.598 10:04:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:49.598 10:04:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.598 10:04:52 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.598 10:04:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.598 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.598 ************************************ 00:05:49.598 START TEST json_config 00:05:49.598 ************************************ 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.598 10:04:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.598 10:04:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.598 10:04:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.598 10:04:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.598 10:04:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.598 10:04:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:49.598 10:04:52 json_config -- scripts/common.sh@345 -- # : 1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.598 10:04:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.598 10:04:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@353 -- # local d=1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.598 10:04:52 json_config -- scripts/common.sh@355 -- # echo 1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.598 10:04:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@353 -- # local d=2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.598 10:04:52 json_config -- scripts/common.sh@355 -- # echo 2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.598 10:04:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.598 10:04:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.598 10:04:52 json_config -- scripts/common.sh@368 -- # return 0 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.598 --rc genhtml_branch_coverage=1 00:05:49.598 --rc genhtml_function_coverage=1 00:05:49.598 --rc genhtml_legend=1 00:05:49.598 --rc geninfo_all_blocks=1 00:05:49.598 --rc geninfo_unexecuted_blocks=1 00:05:49.598 00:05:49.598 ' 00:05:49.598 10:04:52 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.598 --rc genhtml_branch_coverage=1 00:05:49.598 --rc genhtml_function_coverage=1 00:05:49.598 --rc genhtml_legend=1 00:05:49.598 --rc geninfo_all_blocks=1 00:05:49.598 --rc geninfo_unexecuted_blocks=1 00:05:49.599 00:05:49.599 ' 00:05:49.599 10:04:52 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.599 --rc genhtml_branch_coverage=1 00:05:49.599 --rc genhtml_function_coverage=1 00:05:49.599 --rc genhtml_legend=1 00:05:49.599 --rc geninfo_all_blocks=1 00:05:49.599 --rc geninfo_unexecuted_blocks=1 00:05:49.599 00:05:49.599 ' 00:05:49.599 10:04:52 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.599 --rc genhtml_branch_coverage=1 00:05:49.599 --rc genhtml_function_coverage=1 00:05:49.599 --rc genhtml_legend=1 00:05:49.599 --rc geninfo_all_blocks=1 00:05:49.599 --rc geninfo_unexecuted_blocks=1 00:05:49.599 00:05:49.599 ' 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.599 10:04:52 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f468df3-627b-414d-ac31-aa66f29c0fd5 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8f468df3-627b-414d-ac31-aa66f29c0fd5 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.599 10:04:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.599 10:04:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.599 10:04:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.599 10:04:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.599 10:04:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:04:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:04:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:04:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.599 10:04:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@51 -- # : 0 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.599 10:04:52 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.599 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.599 10:04:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.599 WARNING: No tests are enabled so not running JSON configuration tests 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:49.599 10:04:52 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:49.599 00:05:49.599 real 0m0.128s 00:05:49.599 user 0m0.068s 00:05:49.599 sys 0m0.062s 00:05:49.599 10:04:52 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.599 10:04:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.599 ************************************ 00:05:49.599 END TEST json_config 00:05:49.599 ************************************ 00:05:49.599 10:04:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.599 10:04:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.599 10:04:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.599 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.599 ************************************ 00:05:49.599 START TEST json_config_extra_key 00:05:49.599 ************************************ 00:05:49.599 10:04:52 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.857 10:04:52 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.857 10:04:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.857 10:04:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.857 10:04:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.857 10:04:52 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.857 10:04:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:49.857 10:04:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.858 --rc genhtml_branch_coverage=1 00:05:49.858 --rc genhtml_function_coverage=1 00:05:49.858 --rc genhtml_legend=1 00:05:49.858 --rc geninfo_all_blocks=1 00:05:49.858 --rc geninfo_unexecuted_blocks=1 00:05:49.858 00:05:49.858 ' 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.858 --rc genhtml_branch_coverage=1 00:05:49.858 --rc genhtml_function_coverage=1 00:05:49.858 --rc genhtml_legend=1 00:05:49.858 --rc geninfo_all_blocks=1 00:05:49.858 --rc geninfo_unexecuted_blocks=1 00:05:49.858 00:05:49.858 ' 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.858 --rc genhtml_branch_coverage=1 00:05:49.858 --rc genhtml_function_coverage=1 00:05:49.858 --rc genhtml_legend=1 00:05:49.858 --rc geninfo_all_blocks=1 00:05:49.858 --rc geninfo_unexecuted_blocks=1 00:05:49.858 00:05:49.858 ' 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.858 --rc genhtml_branch_coverage=1 00:05:49.858 --rc 
genhtml_function_coverage=1 00:05:49.858 --rc genhtml_legend=1 00:05:49.858 --rc geninfo_all_blocks=1 00:05:49.858 --rc geninfo_unexecuted_blocks=1 00:05:49.858 00:05:49.858 ' 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f468df3-627b-414d-ac31-aa66f29c0fd5 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8f468df3-627b-414d-ac31-aa66f29c0fd5 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.858 10:04:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.858 10:04:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.858 10:04:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.858 10:04:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.858 10:04:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.858 10:04:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.858 10:04:52 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.858 10:04:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:49.858 10:04:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.858 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.858 10:04:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:49.858 INFO: launching applications... 
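The declare -A lines above are the whole state model of the json_config harness: one associative array per app attribute, all keyed by the app name ("target" here), plus an ERR trap so the first failed command aborts the test with its location. A minimal self-contained sketch of that pattern; register_app is an invented helper name, the real harness assigns the arrays inline:

#!/usr/bin/env bash
# Bookkeeping sketch: one associative array per attribute, keyed by app
# name. app_pid is filled in later, when the app is actually launched.
declare -A app_pid=()      # app name -> process id
declare -A app_socket=()   # app name -> RPC socket path
declare -A app_params=()   # app name -> extra command-line parameters

register_app() {           # illustrative helper, not the harness's own
    local app=$1 socket=$2 params=$3
    app_socket["$app"]=$socket
    app_params["$app"]=$params
}

# Abort the whole test on the first failed command, reporting location.
on_error_exit() {
    echo "error in ${1} at line ${2}" >&2
    exit 1
}
trap 'on_error_exit "${FUNCNAME:-main}" "${LINENO}"' ERR

register_app target /var/tmp/spdk_tgt.sock '-m 0x1 -s 1024'
echo "target params: ${app_params[target]}"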
00:05:49.858 10:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57872 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.858 Waiting for target to run... 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57872 /var/tmp/spdk_tgt.sock 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57872 ']' 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.858 10:04:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.858 10:04:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.858 [2024-10-17 10:04:52.904312] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:05:49.858 [2024-10-17 10:04:52.904602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57872 ] 00:05:50.445 [2024-10-17 10:04:53.268275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.445 [2024-10-17 10:04:53.345734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.704 10:04:53 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.704 00:05:50.704 INFO: shutting down applications... 00:05:50.704 10:04:53 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.704 10:04:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
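The launch above hands spdk_tgt a JSON config and a private RPC socket, then waitforlisten blocks until that socket answers. A hedged approximation of the polling loop; wait_for_rpc_socket is an invented name, the retry count and the probe RPC are assumptions, and paths assume an SPDK checkout as the working directory:

# Start the target with an explicit RPC socket and JSON config, then
# poll until the socket answers a trivial RPC or the process dies.
wait_for_rpc_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        # Give up early if the process already exited.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the app is listening.
        if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json test/json_config/extra_key.json &
wait_for_rpc_socket $! /var/tmp/spdk_tgt.sock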
00:05:50.704 10:04:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57872 ]] 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57872 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57872 00:05:50.704 10:04:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.269 10:04:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.269 10:04:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.269 10:04:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57872 00:05:51.269 10:04:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.835 10:04:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.835 10:04:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.835 10:04:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57872 00:05:51.835 10:04:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57872 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.401 SPDK target shutdown done 00:05:52.401 Success 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.401 10:04:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:52.401 10:04:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:52.401 00:05:52.401 real 0m2.590s 00:05:52.401 user 0m2.310s 00:05:52.401 sys 0m0.441s 00:05:52.401 ************************************ 00:05:52.401 END TEST json_config_extra_key 00:05:52.401 ************************************ 00:05:52.401 10:04:55 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.401 10:04:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.401 10:04:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.401 10:04:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.401 10:04:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.401 10:04:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.401 ************************************ 00:05:52.401 START TEST alias_rpc 00:05:52.401 ************************************ 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.402 * Looking for test storage... 
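The shutdown that just completed is a plain signal-and-poll loop: SIGINT once, then kill -0 every half second for at most 30 tries. A sketch of the same loop; the SIGKILL escalation at the end is an assumption, not something the harness is shown doing here:

# Graceful shutdown: send SIGINT, then poll with kill -0 for up to
# 30 half-second intervals before giving up.
shutdown_app() {
    local pid=$1
    local i
    kill -SIGINT "$pid" 2>/dev/null || return 0  # already gone
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || {
            echo 'SPDK target shutdown done'
            return 0
        }
        sleep 0.5
    done
    echo "app $pid did not exit in time; escalating" >&2
    kill -SIGKILL "$pid" 2>/dev/null  # assumption: harness may differ
    return 1
}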
00:05:52.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.402 10:04:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.402 --rc genhtml_branch_coverage=1 00:05:52.402 --rc genhtml_function_coverage=1 00:05:52.402 --rc genhtml_legend=1 00:05:52.402 --rc geninfo_all_blocks=1 00:05:52.402 --rc geninfo_unexecuted_blocks=1 00:05:52.402 00:05:52.402 ' 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.402 --rc genhtml_branch_coverage=1 00:05:52.402 --rc genhtml_function_coverage=1 00:05:52.402 --rc genhtml_legend=1 00:05:52.402 --rc geninfo_all_blocks=1 00:05:52.402 --rc geninfo_unexecuted_blocks=1 00:05:52.402 00:05:52.402 ' 00:05:52.402 10:04:55 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.402 --rc genhtml_branch_coverage=1 00:05:52.402 --rc genhtml_function_coverage=1 00:05:52.402 --rc genhtml_legend=1 00:05:52.402 --rc geninfo_all_blocks=1 00:05:52.402 --rc geninfo_unexecuted_blocks=1 00:05:52.402 00:05:52.402 ' 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.402 --rc genhtml_branch_coverage=1 00:05:52.402 --rc genhtml_function_coverage=1 00:05:52.402 --rc genhtml_legend=1 00:05:52.402 --rc geninfo_all_blocks=1 00:05:52.402 --rc geninfo_unexecuted_blocks=1 00:05:52.402 00:05:52.402 ' 00:05:52.402 10:04:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.402 10:04:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57964 00:05:52.402 10:04:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57964 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57964 ']' 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.402 10:04:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.402 10:04:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.660 [2024-10-17 10:04:55.519851] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:05:52.660 [2024-10-17 10:04:55.520003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57964 ] 00:05:52.660 [2024-10-17 10:04:55.669115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.918 [2024-10-17 10:04:55.753330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.483 10:04:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:53.483 10:04:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57964 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57964 ']' 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57964 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57964 00:05:53.483 killing process with pid 57964 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57964' 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@969 -- # kill 57964 00:05:53.483 10:04:56 alias_rpc -- common/autotest_common.sh@974 -- # wait 57964 00:05:54.856 ************************************ 00:05:54.856 END TEST alias_rpc 00:05:54.856 ************************************ 00:05:54.856 00:05:54.856 real 0m2.460s 00:05:54.856 user 0m2.545s 00:05:54.856 sys 0m0.388s 00:05:54.856 10:04:57 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.856 10:04:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.856 10:04:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:54.856 10:04:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.856 10:04:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.856 10:04:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.856 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.856 ************************************ 00:05:54.856 START TEST spdkcli_tcp 00:05:54.857 ************************************ 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.857 * Looking for test storage... 
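killprocess, visible in the run above, guards the kill with sanity checks: the PID must exist, the platform must be Linux for the ps flags it uses, and the resolved command name must not be sudo. A reduced sketch under those same checks:

# Sketch of the killprocess checks seen above: verify the PID is alive
# and is not a sudo wrapper before signalling, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1              # must exist
    [[ $(uname) == Linux ]] || return 1     # ps flags below are Linux procps
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1         # never signal the sudo parent
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reaps the child; works because this shell started it
}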
00:05:54.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.857 10:04:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.857 --rc genhtml_branch_coverage=1 00:05:54.857 --rc genhtml_function_coverage=1 00:05:54.857 --rc genhtml_legend=1 00:05:54.857 --rc geninfo_all_blocks=1 00:05:54.857 --rc geninfo_unexecuted_blocks=1 00:05:54.857 00:05:54.857 ' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.857 --rc genhtml_branch_coverage=1 00:05:54.857 --rc genhtml_function_coverage=1 00:05:54.857 --rc genhtml_legend=1 00:05:54.857 --rc geninfo_all_blocks=1 00:05:54.857 --rc geninfo_unexecuted_blocks=1 00:05:54.857 
00:05:54.857 ' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.857 --rc genhtml_branch_coverage=1 00:05:54.857 --rc genhtml_function_coverage=1 00:05:54.857 --rc genhtml_legend=1 00:05:54.857 --rc geninfo_all_blocks=1 00:05:54.857 --rc geninfo_unexecuted_blocks=1 00:05:54.857 00:05:54.857 ' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.857 --rc genhtml_branch_coverage=1 00:05:54.857 --rc genhtml_function_coverage=1 00:05:54.857 --rc genhtml_legend=1 00:05:54.857 --rc geninfo_all_blocks=1 00:05:54.857 --rc geninfo_unexecuted_blocks=1 00:05:54.857 00:05:54.857 ' 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58049 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58049 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58049 ']' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.857 10:04:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.857 10:04:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.115 [2024-10-17 10:04:58.024244] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:05:55.115 [2024-10-17 10:04:58.024593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58049 ] 00:05:55.115 [2024-10-17 10:04:58.172257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.373 [2024-10-17 10:04:58.257058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.373 [2024-10-17 10:04:58.257111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.940 10:04:58 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.940 10:04:58 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:55.940 10:04:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58066 00:05:55.940 10:04:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.940 10:04:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.940 [ 00:05:55.940 "bdev_malloc_delete", 00:05:55.940 "bdev_malloc_create", 00:05:55.940 "bdev_null_resize", 00:05:55.940 "bdev_null_delete", 00:05:55.940 "bdev_null_create", 00:05:55.940 "bdev_nvme_cuse_unregister", 00:05:55.940 "bdev_nvme_cuse_register", 00:05:55.940 "bdev_opal_new_user", 00:05:55.940 "bdev_opal_set_lock_state", 00:05:55.940 "bdev_opal_delete", 00:05:55.940 "bdev_opal_get_info", 00:05:55.940 "bdev_opal_create", 00:05:55.940 "bdev_nvme_opal_revert", 00:05:55.940 "bdev_nvme_opal_init", 00:05:55.940 "bdev_nvme_send_cmd", 00:05:55.940 "bdev_nvme_set_keys", 00:05:55.940 "bdev_nvme_get_path_iostat", 00:05:55.940 "bdev_nvme_get_mdns_discovery_info", 00:05:55.940 "bdev_nvme_stop_mdns_discovery", 00:05:55.940 "bdev_nvme_start_mdns_discovery", 00:05:55.940 "bdev_nvme_set_multipath_policy", 00:05:55.940 "bdev_nvme_set_preferred_path", 00:05:55.940 "bdev_nvme_get_io_paths", 00:05:55.940 "bdev_nvme_remove_error_injection", 00:05:55.940 "bdev_nvme_add_error_injection", 00:05:55.940 "bdev_nvme_get_discovery_info", 00:05:55.940 "bdev_nvme_stop_discovery", 00:05:55.940 "bdev_nvme_start_discovery", 00:05:55.940 "bdev_nvme_get_controller_health_info", 00:05:55.940 "bdev_nvme_disable_controller", 00:05:55.940 "bdev_nvme_enable_controller", 00:05:55.940 "bdev_nvme_reset_controller", 00:05:55.940 "bdev_nvme_get_transport_statistics", 00:05:55.940 "bdev_nvme_apply_firmware", 00:05:55.940 "bdev_nvme_detach_controller", 00:05:55.940 "bdev_nvme_get_controllers", 00:05:55.940 "bdev_nvme_attach_controller", 00:05:55.940 "bdev_nvme_set_hotplug", 00:05:55.940 "bdev_nvme_set_options", 00:05:55.940 "bdev_passthru_delete", 00:05:55.940 "bdev_passthru_create", 00:05:55.940 "bdev_lvol_set_parent_bdev", 00:05:55.940 "bdev_lvol_set_parent", 00:05:55.940 "bdev_lvol_check_shallow_copy", 00:05:55.940 "bdev_lvol_start_shallow_copy", 00:05:55.940 "bdev_lvol_grow_lvstore", 00:05:55.940 "bdev_lvol_get_lvols", 00:05:55.940 "bdev_lvol_get_lvstores", 00:05:55.940 "bdev_lvol_delete", 00:05:55.940 "bdev_lvol_set_read_only", 00:05:55.940 "bdev_lvol_resize", 00:05:55.940 "bdev_lvol_decouple_parent", 00:05:55.940 "bdev_lvol_inflate", 00:05:55.940 "bdev_lvol_rename", 00:05:55.940 "bdev_lvol_clone_bdev", 00:05:55.940 "bdev_lvol_clone", 00:05:55.940 "bdev_lvol_snapshot", 00:05:55.940 "bdev_lvol_create", 00:05:55.940 "bdev_lvol_delete_lvstore", 00:05:55.940 "bdev_lvol_rename_lvstore", 00:05:55.940 
"bdev_lvol_create_lvstore", 00:05:55.940 "bdev_raid_set_options", 00:05:55.940 "bdev_raid_remove_base_bdev", 00:05:55.940 "bdev_raid_add_base_bdev", 00:05:55.940 "bdev_raid_delete", 00:05:55.940 "bdev_raid_create", 00:05:55.940 "bdev_raid_get_bdevs", 00:05:55.940 "bdev_error_inject_error", 00:05:55.940 "bdev_error_delete", 00:05:55.940 "bdev_error_create", 00:05:55.940 "bdev_split_delete", 00:05:55.940 "bdev_split_create", 00:05:55.940 "bdev_delay_delete", 00:05:55.940 "bdev_delay_create", 00:05:55.940 "bdev_delay_update_latency", 00:05:55.940 "bdev_zone_block_delete", 00:05:55.940 "bdev_zone_block_create", 00:05:55.940 "blobfs_create", 00:05:55.940 "blobfs_detect", 00:05:55.940 "blobfs_set_cache_size", 00:05:55.940 "bdev_xnvme_delete", 00:05:55.940 "bdev_xnvme_create", 00:05:55.940 "bdev_aio_delete", 00:05:55.940 "bdev_aio_rescan", 00:05:55.940 "bdev_aio_create", 00:05:55.940 "bdev_ftl_set_property", 00:05:55.940 "bdev_ftl_get_properties", 00:05:55.940 "bdev_ftl_get_stats", 00:05:55.940 "bdev_ftl_unmap", 00:05:55.940 "bdev_ftl_unload", 00:05:55.940 "bdev_ftl_delete", 00:05:55.940 "bdev_ftl_load", 00:05:55.940 "bdev_ftl_create", 00:05:55.940 "bdev_virtio_attach_controller", 00:05:55.940 "bdev_virtio_scsi_get_devices", 00:05:55.940 "bdev_virtio_detach_controller", 00:05:55.940 "bdev_virtio_blk_set_hotplug", 00:05:55.940 "bdev_iscsi_delete", 00:05:55.940 "bdev_iscsi_create", 00:05:55.940 "bdev_iscsi_set_options", 00:05:55.940 "accel_error_inject_error", 00:05:55.940 "ioat_scan_accel_module", 00:05:55.940 "dsa_scan_accel_module", 00:05:55.940 "iaa_scan_accel_module", 00:05:55.940 "keyring_file_remove_key", 00:05:55.940 "keyring_file_add_key", 00:05:55.940 "keyring_linux_set_options", 00:05:55.940 "fsdev_aio_delete", 00:05:55.940 "fsdev_aio_create", 00:05:55.940 "iscsi_get_histogram", 00:05:55.940 "iscsi_enable_histogram", 00:05:55.940 "iscsi_set_options", 00:05:55.940 "iscsi_get_auth_groups", 00:05:55.940 "iscsi_auth_group_remove_secret", 00:05:55.940 "iscsi_auth_group_add_secret", 00:05:55.940 "iscsi_delete_auth_group", 00:05:55.940 "iscsi_create_auth_group", 00:05:55.940 "iscsi_set_discovery_auth", 00:05:55.940 "iscsi_get_options", 00:05:55.940 "iscsi_target_node_request_logout", 00:05:55.940 "iscsi_target_node_set_redirect", 00:05:55.941 "iscsi_target_node_set_auth", 00:05:55.941 "iscsi_target_node_add_lun", 00:05:55.941 "iscsi_get_stats", 00:05:55.941 "iscsi_get_connections", 00:05:55.941 "iscsi_portal_group_set_auth", 00:05:55.941 "iscsi_start_portal_group", 00:05:55.941 "iscsi_delete_portal_group", 00:05:55.941 "iscsi_create_portal_group", 00:05:55.941 "iscsi_get_portal_groups", 00:05:55.941 "iscsi_delete_target_node", 00:05:55.941 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.941 "iscsi_target_node_add_pg_ig_maps", 00:05:55.941 "iscsi_create_target_node", 00:05:55.941 "iscsi_get_target_nodes", 00:05:55.941 "iscsi_delete_initiator_group", 00:05:55.941 "iscsi_initiator_group_remove_initiators", 00:05:55.941 "iscsi_initiator_group_add_initiators", 00:05:55.941 "iscsi_create_initiator_group", 00:05:55.941 "iscsi_get_initiator_groups", 00:05:55.941 "nvmf_set_crdt", 00:05:55.941 "nvmf_set_config", 00:05:55.941 "nvmf_set_max_subsystems", 00:05:55.941 "nvmf_stop_mdns_prr", 00:05:55.941 "nvmf_publish_mdns_prr", 00:05:55.941 "nvmf_subsystem_get_listeners", 00:05:55.941 "nvmf_subsystem_get_qpairs", 00:05:55.941 "nvmf_subsystem_get_controllers", 00:05:55.941 "nvmf_get_stats", 00:05:55.941 "nvmf_get_transports", 00:05:55.941 "nvmf_create_transport", 00:05:55.941 "nvmf_get_targets", 00:05:55.941 
"nvmf_delete_target", 00:05:55.941 "nvmf_create_target", 00:05:55.941 "nvmf_subsystem_allow_any_host", 00:05:55.941 "nvmf_subsystem_set_keys", 00:05:55.941 "nvmf_subsystem_remove_host", 00:05:55.941 "nvmf_subsystem_add_host", 00:05:55.941 "nvmf_ns_remove_host", 00:05:55.941 "nvmf_ns_add_host", 00:05:55.941 "nvmf_subsystem_remove_ns", 00:05:55.941 "nvmf_subsystem_set_ns_ana_group", 00:05:55.941 "nvmf_subsystem_add_ns", 00:05:55.941 "nvmf_subsystem_listener_set_ana_state", 00:05:55.941 "nvmf_discovery_get_referrals", 00:05:55.941 "nvmf_discovery_remove_referral", 00:05:55.941 "nvmf_discovery_add_referral", 00:05:55.941 "nvmf_subsystem_remove_listener", 00:05:55.941 "nvmf_subsystem_add_listener", 00:05:55.941 "nvmf_delete_subsystem", 00:05:55.941 "nvmf_create_subsystem", 00:05:55.941 "nvmf_get_subsystems", 00:05:55.941 "env_dpdk_get_mem_stats", 00:05:55.941 "nbd_get_disks", 00:05:55.941 "nbd_stop_disk", 00:05:55.941 "nbd_start_disk", 00:05:55.941 "ublk_recover_disk", 00:05:55.941 "ublk_get_disks", 00:05:55.941 "ublk_stop_disk", 00:05:55.941 "ublk_start_disk", 00:05:55.941 "ublk_destroy_target", 00:05:55.941 "ublk_create_target", 00:05:55.941 "virtio_blk_create_transport", 00:05:55.941 "virtio_blk_get_transports", 00:05:55.941 "vhost_controller_set_coalescing", 00:05:55.941 "vhost_get_controllers", 00:05:55.941 "vhost_delete_controller", 00:05:55.941 "vhost_create_blk_controller", 00:05:55.941 "vhost_scsi_controller_remove_target", 00:05:55.941 "vhost_scsi_controller_add_target", 00:05:55.941 "vhost_start_scsi_controller", 00:05:55.941 "vhost_create_scsi_controller", 00:05:55.941 "thread_set_cpumask", 00:05:55.941 "scheduler_set_options", 00:05:55.941 "framework_get_governor", 00:05:55.941 "framework_get_scheduler", 00:05:55.941 "framework_set_scheduler", 00:05:55.941 "framework_get_reactors", 00:05:55.941 "thread_get_io_channels", 00:05:55.941 "thread_get_pollers", 00:05:55.941 "thread_get_stats", 00:05:55.941 "framework_monitor_context_switch", 00:05:55.941 "spdk_kill_instance", 00:05:55.941 "log_enable_timestamps", 00:05:55.941 "log_get_flags", 00:05:55.941 "log_clear_flag", 00:05:55.941 "log_set_flag", 00:05:55.941 "log_get_level", 00:05:55.941 "log_set_level", 00:05:55.941 "log_get_print_level", 00:05:55.941 "log_set_print_level", 00:05:55.941 "framework_enable_cpumask_locks", 00:05:55.941 "framework_disable_cpumask_locks", 00:05:55.941 "framework_wait_init", 00:05:55.941 "framework_start_init", 00:05:55.941 "scsi_get_devices", 00:05:55.941 "bdev_get_histogram", 00:05:55.941 "bdev_enable_histogram", 00:05:55.941 "bdev_set_qos_limit", 00:05:55.941 "bdev_set_qd_sampling_period", 00:05:55.941 "bdev_get_bdevs", 00:05:55.941 "bdev_reset_iostat", 00:05:55.941 "bdev_get_iostat", 00:05:55.941 "bdev_examine", 00:05:55.941 "bdev_wait_for_examine", 00:05:55.941 "bdev_set_options", 00:05:55.941 "accel_get_stats", 00:05:55.941 "accel_set_options", 00:05:55.941 "accel_set_driver", 00:05:55.941 "accel_crypto_key_destroy", 00:05:55.941 "accel_crypto_keys_get", 00:05:55.941 "accel_crypto_key_create", 00:05:55.941 "accel_assign_opc", 00:05:55.941 "accel_get_module_info", 00:05:55.941 "accel_get_opc_assignments", 00:05:55.941 "vmd_rescan", 00:05:55.941 "vmd_remove_device", 00:05:55.941 "vmd_enable", 00:05:55.941 "sock_get_default_impl", 00:05:55.941 "sock_set_default_impl", 00:05:55.941 "sock_impl_set_options", 00:05:55.941 "sock_impl_get_options", 00:05:55.941 "iobuf_get_stats", 00:05:55.941 "iobuf_set_options", 00:05:55.941 "keyring_get_keys", 00:05:55.941 "framework_get_pci_devices", 00:05:55.941 
"framework_get_config", 00:05:55.941 "framework_get_subsystems", 00:05:55.941 "fsdev_set_opts", 00:05:55.941 "fsdev_get_opts", 00:05:55.941 "trace_get_info", 00:05:55.941 "trace_get_tpoint_group_mask", 00:05:55.941 "trace_disable_tpoint_group", 00:05:55.941 "trace_enable_tpoint_group", 00:05:55.941 "trace_clear_tpoint_mask", 00:05:55.941 "trace_set_tpoint_mask", 00:05:55.941 "notify_get_notifications", 00:05:55.941 "notify_get_types", 00:05:55.941 "spdk_get_version", 00:05:55.941 "rpc_get_methods" 00:05:55.941 ] 00:05:56.199 10:04:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.199 10:04:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.199 10:04:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58049 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58049 ']' 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58049 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58049 00:05:56.199 killing process with pid 58049 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.199 10:04:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.200 10:04:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58049' 00:05:56.200 10:04:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58049 00:05:56.200 10:04:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58049 00:05:57.574 ************************************ 00:05:57.574 END TEST spdkcli_tcp 00:05:57.574 ************************************ 00:05:57.574 00:05:57.574 real 0m2.492s 00:05:57.574 user 0m4.436s 00:05:57.574 sys 0m0.427s 00:05:57.574 10:05:00 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.574 10:05:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.574 10:05:00 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.574 10:05:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.574 10:05:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.574 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:57.574 ************************************ 00:05:57.574 START TEST dpdk_mem_utility 00:05:57.574 ************************************ 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.574 * Looking for test storage... 
00:05:57.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:57.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.574 10:05:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.574 --rc genhtml_branch_coverage=1 00:05:57.574 --rc genhtml_function_coverage=1 00:05:57.574 --rc genhtml_legend=1 00:05:57.574 --rc geninfo_all_blocks=1 00:05:57.574 --rc geninfo_unexecuted_blocks=1 00:05:57.574 00:05:57.574 ' 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.574 --rc genhtml_branch_coverage=1 00:05:57.574 --rc genhtml_function_coverage=1 00:05:57.574 --rc genhtml_legend=1 00:05:57.574 --rc geninfo_all_blocks=1 00:05:57.574 --rc geninfo_unexecuted_blocks=1 00:05:57.574 00:05:57.574 ' 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.574 --rc genhtml_branch_coverage=1 00:05:57.574 --rc genhtml_function_coverage=1 00:05:57.574 --rc genhtml_legend=1 00:05:57.574 --rc geninfo_all_blocks=1 00:05:57.574 --rc geninfo_unexecuted_blocks=1 00:05:57.574 00:05:57.574 ' 00:05:57.574 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.575 --rc genhtml_branch_coverage=1 00:05:57.575 --rc genhtml_function_coverage=1 00:05:57.575 --rc genhtml_legend=1 00:05:57.575 --rc geninfo_all_blocks=1 00:05:57.575 --rc geninfo_unexecuted_blocks=1 00:05:57.575 00:05:57.575 ' 00:05:57.575 10:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.575 10:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58154 00:05:57.575 10:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58154 00:05:57.575 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58154 ']' 00:05:57.575 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.575 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.575 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.575 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.575 10:05:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.575 10:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.575 [2024-10-17 10:05:00.562089] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:05:57.575 [2024-10-17 10:05:00.562220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58154 ] 00:05:57.833 [2024-10-17 10:05:00.703886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.833 [2024-10-17 10:05:00.786754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.400 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.400 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:58.400 10:05:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.400 10:05:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.400 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.400 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.400 { 00:05:58.400 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.400 } 00:05:58.400 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.400 10:05:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.400 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:58.400 1 heaps totaling size 816.000000 MiB 00:05:58.400 size: 816.000000 MiB heap id: 0 00:05:58.400 end heaps---------- 00:05:58.400 9 mempools totaling size 595.772034 MiB 00:05:58.400 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.400 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.400 size: 92.545471 MiB name: bdev_io_58154 00:05:58.400 size: 50.003479 MiB name: msgpool_58154 00:05:58.400 size: 36.509338 MiB name: fsdev_io_58154 00:05:58.400 size: 21.763794 MiB name: PDU_Pool 00:05:58.400 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.400 size: 4.133484 MiB name: evtpool_58154 00:05:58.400 size: 0.026123 MiB name: Session_Pool 00:05:58.400 end mempools------- 00:05:58.400 6 memzones totaling size 4.142822 MiB 00:05:58.400 size: 1.000366 MiB name: RG_ring_0_58154 00:05:58.400 size: 1.000366 MiB name: RG_ring_1_58154 00:05:58.400 size: 1.000366 MiB name: RG_ring_4_58154 00:05:58.400 size: 1.000366 MiB name: RG_ring_5_58154 00:05:58.400 size: 0.125366 MiB name: RG_ring_2_58154 00:05:58.400 size: 0.015991 MiB name: RG_ring_3_58154 00:05:58.400 end memzones------- 00:05:58.400 10:05:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.400 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:05:58.400 list of free elements. 
size: 16.791138 MiB 00:05:58.400 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:58.400 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:58.400 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:58.400 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:58.400 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:58.400 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:58.400 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:58.400 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:58.400 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:58.400 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:58.400 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:58.400 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:05:58.400 element at address: 0x200000c00000 with size: 0.492371 MiB 00:05:58.400 element at address: 0x200018e00000 with size: 0.488464 MiB 00:05:58.400 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:58.400 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:58.400 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:58.400 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:58.400 list of standard malloc elements. size: 199.287964 MiB 00:05:58.400 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:58.400 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:58.400 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:58.400 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:58.400 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:58.400 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:58.400 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:58.400 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:58.400 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:58.400 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:58.400 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:58.400 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:58.400 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:58.401 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:58.401 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:58.401 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:58.401 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:58.401 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:58.401 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:58.401 element at address: 0x2000004ff040 with size: 0.000244 MiB
[... dump elided: several hundred further "element at address: 0x... with size: 0.000244 MiB" records, all 0.000244 MiB each, covering the malloc-heap regions from 0x2000004ff040 up through 0x20002806fe80 ...]
00:05:58.402 list of memzone associated elements.
size: 599.920898 MiB 00:05:58.402 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:58.402 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.402 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:58.402 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.402 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:58.402 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58154_0 00:05:58.402 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:58.402 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58154_0 00:05:58.402 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:58.402 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58154_0 00:05:58.402 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:58.402 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.402 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:58.402 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.402 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:58.402 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58154_0 00:05:58.402 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:58.402 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58154 00:05:58.402 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:58.402 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58154 00:05:58.402 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:58.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.402 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:58.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.402 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:58.402 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.402 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:58.402 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.402 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:58.402 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58154 00:05:58.402 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:58.403 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58154 00:05:58.403 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:58.403 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58154 00:05:58.403 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:58.403 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58154 00:05:58.403 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:58.403 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58154 00:05:58.403 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:58.403 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58154 00:05:58.403 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:58.403 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:58.403 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:58.403 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.403 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:58.403 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.403 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:58.403 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58154 00:05:58.403 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:58.403 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58154 00:05:58.403 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:58.403 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:58.403 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:58.403 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:58.403 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:58.403 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58154 00:05:58.403 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:58.403 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:58.403 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:58.403 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58154 00:05:58.403 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:58.403 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58154 00:05:58.403 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:58.403 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58154 00:05:58.403 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:58.403 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:58.403 10:05:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:58.403 10:05:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58154 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58154 ']' 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58154 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58154 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58154' 00:05:58.403 killing process with pid 58154 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58154 00:05:58.403 10:05:01 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58154 00:05:59.790 00:05:59.790 real 0m2.333s 00:05:59.790 user 0m2.290s 00:05:59.790 sys 0m0.375s 00:05:59.790 10:05:02 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.790 10:05:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.790 ************************************ 00:05:59.790 END TEST dpdk_mem_utility 00:05:59.790 ************************************ 00:05:59.790 10:05:02 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.790 10:05:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.790 10:05:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.790 10:05:02 -- common/autotest_common.sh@10 -- # set +x 
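[Editor's note: the per-element dump condensed above is easier to digest in aggregate. A minimal awk sketch follows -- illustrative only, not part of the SPDK test suite; the script name and output format are invented -- that tallies "element at address: ... with size: ... MiB" records by size:

#!/usr/bin/env bash
# summarize_mem_dump.sh -- tally DPDK memory-element records by size (sketch).
# Assumes input containing one or more records of the form
# "element at address: 0x... with size: N MiB", as in the dump above.
awk '
  /element at address:/ {
    n = split($0, recs, /element at address:/)      # a line may hold many records
    for (i = 2; i <= n; i++)
      if (match(recs[i], /with size: *[0-9.]+ MiB/)) {
        sz = substr(recs[i], RSTART, RLENGTH)
        sub(/with size: */, "", sz)                 # keep just "N MiB"
        count[sz]++
      }
  }
  END { for (sz in count) printf "%6d x %s\n", count[sz], sz }
' "$@"

Run as "summarize_mem_dump.sh build.log"; on the output above it would report a few hundred 0.000244 MiB elements alongside the handful of larger pool allocations listed in the memzone section.]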
00:05:59.790 ************************************ 00:05:59.790 START TEST event 00:05:59.790 ************************************ 00:05:59.790 10:05:02 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:59.790 * Looking for test storage... 00:05:59.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:59.790 10:05:02 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:59.790 10:05:02 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:59.790 10:05:02 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:59.790 10:05:02 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:59.790 10:05:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.790 10:05:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.790 10:05:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.790 10:05:02 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.790 10:05:02 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.790 10:05:02 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.790 10:05:02 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.790 10:05:02 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.791 10:05:02 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.791 10:05:02 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.791 10:05:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.791 10:05:02 event -- scripts/common.sh@344 -- # case "$op" in 00:05:59.791 10:05:02 event -- scripts/common.sh@345 -- # : 1 00:05:59.791 10:05:02 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.791 10:05:02 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.791 10:05:02 event -- scripts/common.sh@365 -- # decimal 1 00:05:59.791 10:05:02 event -- scripts/common.sh@353 -- # local d=1 00:05:59.791 10:05:02 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.791 10:05:02 event -- scripts/common.sh@355 -- # echo 1 00:05:59.791 10:05:02 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.791 10:05:02 event -- scripts/common.sh@366 -- # decimal 2 00:05:59.791 10:05:02 event -- scripts/common.sh@353 -- # local d=2 00:05:59.791 10:05:02 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.791 10:05:02 event -- scripts/common.sh@355 -- # echo 2 00:05:59.791 10:05:02 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.791 10:05:02 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.791 10:05:02 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.791 10:05:02 event -- scripts/common.sh@368 -- # return 0 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.791 --rc genhtml_branch_coverage=1 00:05:59.791 --rc genhtml_function_coverage=1 00:05:59.791 --rc genhtml_legend=1 00:05:59.791 --rc geninfo_all_blocks=1 00:05:59.791 --rc geninfo_unexecuted_blocks=1 00:05:59.791 00:05:59.791 ' 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.791 --rc genhtml_branch_coverage=1 00:05:59.791 --rc genhtml_function_coverage=1 00:05:59.791 --rc genhtml_legend=1 00:05:59.791 --rc 
geninfo_all_blocks=1 00:05:59.791 --rc geninfo_unexecuted_blocks=1 00:05:59.791 00:05:59.791 ' 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.791 --rc genhtml_branch_coverage=1 00:05:59.791 --rc genhtml_function_coverage=1 00:05:59.791 --rc genhtml_legend=1 00:05:59.791 --rc geninfo_all_blocks=1 00:05:59.791 --rc geninfo_unexecuted_blocks=1 00:05:59.791 00:05:59.791 ' 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:59.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.791 --rc genhtml_branch_coverage=1 00:05:59.791 --rc genhtml_function_coverage=1 00:05:59.791 --rc genhtml_legend=1 00:05:59.791 --rc geninfo_all_blocks=1 00:05:59.791 --rc geninfo_unexecuted_blocks=1 00:05:59.791 00:05:59.791 ' 00:05:59.791 10:05:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:59.791 10:05:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:59.791 10:05:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:59.791 10:05:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.791 10:05:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.791 ************************************ 00:05:59.791 START TEST event_perf 00:05:59.791 ************************************ 00:05:59.791 10:05:02 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.050 Running I/O for 1 seconds...[2024-10-17 10:05:02.893266] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:00.050 [2024-10-17 10:05:02.893910] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58246 ] 00:06:00.050 [2024-10-17 10:05:03.045799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.307 [2024-10-17 10:05:03.150334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.307 [2024-10-17 10:05:03.150834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.307 [2024-10-17 10:05:03.151076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.307 Running I/O for 1 seconds...[2024-10-17 10:05:03.151114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.689 00:06:01.689 lcore 0: 157726 00:06:01.689 lcore 1: 157721 00:06:01.689 lcore 2: 157723 00:06:01.689 lcore 3: 157725 00:06:01.689 done. 
00:06:01.689 00:06:01.689 real 0m1.526s 00:06:01.689 user 0m4.314s 00:06:01.689 sys 0m0.086s 00:06:01.689 ************************************ 00:06:01.689 END TEST event_perf 00:06:01.689 ************************************ 00:06:01.689 10:05:04 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.689 10:05:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.689 10:05:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:01.689 10:05:04 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:01.689 10:05:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.689 10:05:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.689 ************************************ 00:06:01.689 START TEST event_reactor 00:06:01.689 ************************************ 00:06:01.689 10:05:04 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:01.689 [2024-10-17 10:05:04.470698] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:01.690 [2024-10-17 10:05:04.471016] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58285 ] 00:06:01.690 [2024-10-17 10:05:04.617627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.690 [2024-10-17 10:05:04.715434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.063 test_start 00:06:03.063 oneshot 00:06:03.063 tick 100 00:06:03.063 tick 100 00:06:03.063 tick 250 00:06:03.063 tick 100 00:06:03.063 tick 100 00:06:03.063 tick 100 00:06:03.063 tick 250 00:06:03.063 tick 500 00:06:03.063 tick 100 00:06:03.063 tick 100 00:06:03.063 tick 250 00:06:03.063 tick 100 00:06:03.063 tick 100 00:06:03.063 test_end 00:06:03.063 00:06:03.063 real 0m1.436s 00:06:03.063 user 0m1.256s 00:06:03.063 sys 0m0.072s 00:06:03.063 10:05:05 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.063 ************************************ 00:06:03.063 END TEST event_reactor 00:06:03.063 ************************************ 00:06:03.063 10:05:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:03.063 10:05:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.063 10:05:05 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:03.063 10:05:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.063 10:05:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.063 ************************************ 00:06:03.063 START TEST event_reactor_perf 00:06:03.063 ************************************ 00:06:03.063 10:05:05 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.063 [2024-10-17 10:05:05.953155] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:06:03.063 [2024-10-17 10:05:05.953383] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58322 ] 00:06:03.063 [2024-10-17 10:05:06.105736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.321 [2024-10-17 10:05:06.204353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.262 test_start 00:06:04.262 test_end 00:06:04.262 Performance: 313232 events per second 00:06:04.519 00:06:04.519 real 0m1.435s 00:06:04.519 user 0m1.265s 00:06:04.519 sys 0m0.062s 00:06:04.519 10:05:07 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.519 10:05:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 END TEST event_reactor_perf 00:06:04.519 ************************************ 00:06:04.519 10:05:07 event -- event/event.sh@49 -- # uname -s 00:06:04.519 10:05:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.519 10:05:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.519 10:05:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.519 10:05:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.519 10:05:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 START TEST event_scheduler 00:06:04.519 ************************************ 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.519 * Looking for test storage... 
00:06:04.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.519 10:05:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.519 10:05:07 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.519 --rc genhtml_branch_coverage=1 00:06:04.520 --rc genhtml_function_coverage=1 00:06:04.520 --rc genhtml_legend=1 00:06:04.520 --rc geninfo_all_blocks=1 00:06:04.520 --rc geninfo_unexecuted_blocks=1 00:06:04.520 00:06:04.520 ' 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.520 --rc genhtml_branch_coverage=1 00:06:04.520 --rc genhtml_function_coverage=1 00:06:04.520 --rc genhtml_legend=1 00:06:04.520 --rc geninfo_all_blocks=1 00:06:04.520 --rc geninfo_unexecuted_blocks=1 00:06:04.520 00:06:04.520 ' 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.520 --rc genhtml_branch_coverage=1 00:06:04.520 --rc genhtml_function_coverage=1 00:06:04.520 --rc genhtml_legend=1 00:06:04.520 --rc geninfo_all_blocks=1 00:06:04.520 --rc geninfo_unexecuted_blocks=1 00:06:04.520 00:06:04.520 ' 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.520 --rc genhtml_branch_coverage=1 00:06:04.520 --rc genhtml_function_coverage=1 00:06:04.520 --rc genhtml_legend=1 00:06:04.520 --rc geninfo_all_blocks=1 00:06:04.520 --rc geninfo_unexecuted_blocks=1 00:06:04.520 00:06:04.520 ' 00:06:04.520 10:05:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.520 10:05:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58392 00:06:04.520 10:05:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.520 10:05:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.520 10:05:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58392 00:06:04.520 10:05:07 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58392 ']' 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.520 10:05:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.778 [2024-10-17 10:05:07.611949] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:04.778 [2024-10-17 10:05:07.612140] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58392 ] 00:06:04.778 [2024-10-17 10:05:07.760896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.778 [2024-10-17 10:05:07.854160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.778 [2024-10-17 10:05:07.854611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.778 [2024-10-17 10:05:07.854696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.778 [2024-10-17 10:05:07.854673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:05.714 10:05:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.714 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.714 POWER: Cannot set governor of lcore 0 to performance 00:06:05.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.714 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.714 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.714 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:05.714 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:05.714 POWER: Unable to set Power Management Environment for lcore 0 00:06:05.714 [2024-10-17 10:05:08.468541] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:05.714 [2024-10-17 10:05:08.468559] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:05.714 [2024-10-17 10:05:08.468568] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.714 [2024-10-17 10:05:08.468582] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.714 [2024-10-17 10:05:08.468588] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.714 [2024-10-17 10:05:08.468595] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.714 10:05:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.714 [2024-10-17 10:05:08.665452] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.714 10:05:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.714 10:05:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 ************************************ 00:06:05.715 START TEST scheduler_create_thread 00:06:05.715 ************************************ 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 2 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 3 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 4 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 5 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 6 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 7 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 8 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 9 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 10 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.715 10:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.281 10:05:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.281 10:05:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.281 10:05:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.281 10:05:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.281 10:05:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.654 ************************************ 00:06:07.654 END TEST scheduler_create_thread 00:06:07.654 ************************************ 00:06:07.654 10:05:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.654 00:06:07.654 real 0m1.756s 00:06:07.654 user 0m0.013s 00:06:07.654 sys 0m0.006s 00:06:07.654 10:05:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.654 10:05:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.654 10:05:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:07.654 10:05:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58392 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58392 ']' 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58392 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58392 00:06:07.654 killing process with pid 58392 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58392' 00:06:07.654 10:05:10 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58392 00:06:07.654 10:05:10 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 58392 00:06:07.911 [2024-10-17 10:05:10.906950] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:08.476 ************************************ 00:06:08.476 END TEST event_scheduler 00:06:08.476 ************************************ 00:06:08.476 00:06:08.476 real 0m4.093s 00:06:08.476 user 0m6.888s 00:06:08.476 sys 0m0.349s 00:06:08.476 10:05:11 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.476 10:05:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.476 10:05:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.476 10:05:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.476 10:05:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.476 10:05:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.476 10:05:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.476 ************************************ 00:06:08.476 START TEST app_repeat 00:06:08.476 ************************************ 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58487 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.476 Process app_repeat pid: 58487 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58487' 00:06:08.476 spdk_app_start Round 0 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58487 ']' 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.476 10:05:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.476 10:05:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.732 [2024-10-17 10:05:11.575648] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:06:08.732 [2024-10-17 10:05:11.575768] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:06:08.732 [2024-10-17 10:05:11.726751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.990 [2024-10-17 10:05:11.832685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.990 [2024-10-17 10:05:11.832892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.554 10:05:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.554 10:05:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:09.554 10:05:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.820 Malloc0 00:06:09.820 10:05:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.077 Malloc1 00:06:10.077 10:05:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.077 10:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.077 /dev/nbd0 00:06:10.334 10:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.334 10:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:10.334 10:05:13 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.334 10:05:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.335 1+0 records in 00:06:10.335 1+0 records out 00:06:10.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040326 s, 10.2 MB/s 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.335 10:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.335 10:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.335 10:05:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.335 /dev/nbd1 00:06:10.335 10:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.335 10:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.335 10:05:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.592 1+0 records in 00:06:10.592 1+0 records out 00:06:10.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016265 s, 25.2 MB/s 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.592 10:05:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.592 10:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.592 10:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.592 10:05:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.592 10:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.592 
10:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.592 10:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.592 { 00:06:10.592 "nbd_device": "/dev/nbd0", 00:06:10.592 "bdev_name": "Malloc0" 00:06:10.592 }, 00:06:10.592 { 00:06:10.592 "nbd_device": "/dev/nbd1", 00:06:10.592 "bdev_name": "Malloc1" 00:06:10.593 } 00:06:10.593 ]' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.593 { 00:06:10.593 "nbd_device": "/dev/nbd0", 00:06:10.593 "bdev_name": "Malloc0" 00:06:10.593 }, 00:06:10.593 { 00:06:10.593 "nbd_device": "/dev/nbd1", 00:06:10.593 "bdev_name": "Malloc1" 00:06:10.593 } 00:06:10.593 ]' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.593 /dev/nbd1' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.593 /dev/nbd1' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.593 10:05:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.852 256+0 records in 00:06:10.852 256+0 records out 00:06:10.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00904798 s, 116 MB/s 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.852 256+0 records in 00:06:10.852 256+0 records out 00:06:10.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204156 s, 51.4 MB/s 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.852 256+0 records in 00:06:10.852 256+0 records out 00:06:10.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212176 s, 49.4 MB/s 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.852 10:05:13 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.852 10:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.114 10:05:14 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.114 10:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.373 10:05:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.373 10:05:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.631 10:05:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.209 [2024-10-17 10:05:15.249847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.469 [2024-10-17 10:05:15.331235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.469 [2024-10-17 10:05:15.331467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.469 [2024-10-17 10:05:15.433952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.469 [2024-10-17 10:05:15.434036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.044 10:05:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.044 spdk_app_start Round 1 00:06:15.044 10:05:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.044 10:05:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58487 ']' 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.044 10:05:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.044 10:05:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.044 Malloc0 00:06:15.044 10:05:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.305 Malloc1 00:06:15.305 10:05:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.305 10:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.305 /dev/nbd0 00:06:15.567 10:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.567 10:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.567 10:05:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.567 10:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.567 10:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.567 10:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.567 10:05:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.567 10:05:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.568 1+0 records in 00:06:15.568 1+0 records out 
00:06:15.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556613 s, 7.4 MB/s 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.568 /dev/nbd1 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.568 1+0 records in 00:06:15.568 1+0 records out 00:06:15.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545795 s, 7.5 MB/s 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.568 10:05:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.568 10:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.829 { 00:06:15.829 "nbd_device": "/dev/nbd0", 00:06:15.829 "bdev_name": "Malloc0" 00:06:15.829 }, 00:06:15.829 { 00:06:15.829 "nbd_device": "/dev/nbd1", 00:06:15.829 "bdev_name": "Malloc1" 00:06:15.829 } 
00:06:15.829 ]' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.829 { 00:06:15.829 "nbd_device": "/dev/nbd0", 00:06:15.829 "bdev_name": "Malloc0" 00:06:15.829 }, 00:06:15.829 { 00:06:15.829 "nbd_device": "/dev/nbd1", 00:06:15.829 "bdev_name": "Malloc1" 00:06:15.829 } 00:06:15.829 ]' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.829 /dev/nbd1' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.829 /dev/nbd1' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.829 256+0 records in 00:06:15.829 256+0 records out 00:06:15.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790184 s, 133 MB/s 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.829 256+0 records in 00:06:15.829 256+0 records out 00:06:15.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188891 s, 55.5 MB/s 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.829 256+0 records in 00:06:15.829 256+0 records out 00:06:15.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177039 s, 59.2 MB/s 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.829 10:05:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.829 10:05:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.087 10:05:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.347 10:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.608 10:05:19 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.608 10:05:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.608 10:05:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.866 10:05:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.477 [2024-10-17 10:05:20.421598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.477 [2024-10-17 10:05:20.505379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.477 [2024-10-17 10:05:20.505490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.736 [2024-10-17 10:05:20.608258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.736 [2024-10-17 10:05:20.608499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.263 spdk_app_start Round 2 00:06:20.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.263 10:05:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.263 10:05:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.263 10:05:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:06:20.263 10:05:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58487 ']' 00:06:20.263 10:05:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.263 10:05:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.263 10:05:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:20.263 10:05:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.263 10:05:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.263 10:05:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.263 10:05:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.263 10:05:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.263 Malloc0 00:06:20.263 10:05:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.522 Malloc1 00:06:20.522 10:05:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.522 10:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.781 /dev/nbd0 00:06:20.781 10:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.781 10:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.781 10:05:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.782 1+0 records in 00:06:20.782 1+0 records out 
00:06:20.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278007 s, 14.7 MB/s 00:06:20.782 10:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.782 10:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.782 10:05:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.782 10:05:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.782 10:05:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.782 10:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.782 10:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.782 10:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.042 /dev/nbd1 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.042 1+0 records in 00:06:21.042 1+0 records out 00:06:21.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296743 s, 13.8 MB/s 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.042 10:05:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.042 10:05:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.301 { 00:06:21.301 "nbd_device": "/dev/nbd0", 00:06:21.301 "bdev_name": "Malloc0" 00:06:21.301 }, 00:06:21.301 { 00:06:21.301 "nbd_device": "/dev/nbd1", 00:06:21.301 "bdev_name": "Malloc1" 00:06:21.301 } 
00:06:21.301 ]' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.301 { 00:06:21.301 "nbd_device": "/dev/nbd0", 00:06:21.301 "bdev_name": "Malloc0" 00:06:21.301 }, 00:06:21.301 { 00:06:21.301 "nbd_device": "/dev/nbd1", 00:06:21.301 "bdev_name": "Malloc1" 00:06:21.301 } 00:06:21.301 ]' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.301 /dev/nbd1' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.301 /dev/nbd1' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.301 256+0 records in 00:06:21.301 256+0 records out 00:06:21.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048252 s, 217 MB/s 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.301 10:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.301 256+0 records in 00:06:21.301 256+0 records out 00:06:21.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196385 s, 53.4 MB/s 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.302 256+0 records in 00:06:21.302 256+0 records out 00:06:21.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161559 s, 64.9 MB/s 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.302 10:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.561 10:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.821 10:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.821 10:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.821 10:05:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.821 10:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.821 10:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.822 10:05:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.822 10:05:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.822 10:05:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.822 10:05:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.822 10:05:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.822 10:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.083 10:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.083 10:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.083 10:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:22.083 10:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.083 10:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.083 10:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.083 10:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.083 10:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.083 10:05:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.083 10:05:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.083 10:05:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.083 10:05:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.083 10:05:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.392 10:05:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.987 [2024-10-17 10:05:26.029523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.247 [2024-10-17 10:05:26.129167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.248 [2024-10-17 10:05:26.129491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.248 [2024-10-17 10:05:26.252658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.248 [2024-10-17 10:05:26.252756] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.788 10:05:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58487 /var/tmp/spdk-nbd.sock 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58487 ']' 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.788 10:05:28 event.app_repeat -- event/event.sh@39 -- # killprocess 58487 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58487 ']' 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58487 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58487 00:06:25.788 killing process with pid 58487 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58487' 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58487 00:06:25.788 10:05:28 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58487 00:06:26.047 spdk_app_start is called in Round 0. 00:06:26.047 Shutdown signal received, stop current app iteration 00:06:26.047 Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 reinitialization... 00:06:26.047 spdk_app_start is called in Round 1. 00:06:26.047 Shutdown signal received, stop current app iteration 00:06:26.047 Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 reinitialization... 00:06:26.047 spdk_app_start is called in Round 2. 00:06:26.047 Shutdown signal received, stop current app iteration 00:06:26.047 Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 reinitialization... 00:06:26.047 spdk_app_start is called in Round 3. 00:06:26.047 Shutdown signal received, stop current app iteration 00:06:26.047 10:05:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.047 10:05:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.047 00:06:26.047 real 0m17.580s 00:06:26.047 user 0m38.311s 00:06:26.047 sys 0m2.013s 00:06:26.047 10:05:29 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.047 10:05:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.047 ************************************ 00:06:26.047 END TEST app_repeat 00:06:26.047 ************************************ 00:06:26.305 10:05:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.305 10:05:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.305 10:05:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.305 10:05:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.305 10:05:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.305 ************************************ 00:06:26.305 START TEST cpu_locks 00:06:26.305 ************************************ 00:06:26.305 10:05:29 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.305 * Looking for test storage... 
00:06:26.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.305 10:05:29 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.305 10:05:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.305 10:05:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.305 10:05:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:26.305 10:05:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.306 10:05:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.306 --rc genhtml_branch_coverage=1 00:06:26.306 --rc genhtml_function_coverage=1 00:06:26.306 --rc genhtml_legend=1 00:06:26.306 --rc geninfo_all_blocks=1 00:06:26.306 --rc geninfo_unexecuted_blocks=1 00:06:26.306 00:06:26.306 ' 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.306 --rc genhtml_branch_coverage=1 00:06:26.306 --rc genhtml_function_coverage=1 
00:06:26.306 --rc genhtml_legend=1 00:06:26.306 --rc geninfo_all_blocks=1 00:06:26.306 --rc geninfo_unexecuted_blocks=1 00:06:26.306 00:06:26.306 ' 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.306 --rc genhtml_branch_coverage=1 00:06:26.306 --rc genhtml_function_coverage=1 00:06:26.306 --rc genhtml_legend=1 00:06:26.306 --rc geninfo_all_blocks=1 00:06:26.306 --rc geninfo_unexecuted_blocks=1 00:06:26.306 00:06:26.306 ' 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.306 --rc genhtml_branch_coverage=1 00:06:26.306 --rc genhtml_function_coverage=1 00:06:26.306 --rc genhtml_legend=1 00:06:26.306 --rc geninfo_all_blocks=1 00:06:26.306 --rc geninfo_unexecuted_blocks=1 00:06:26.306 00:06:26.306 ' 00:06:26.306 10:05:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.306 10:05:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.306 10:05:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.306 10:05:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.306 10:05:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.306 ************************************ 00:06:26.306 START TEST default_locks 00:06:26.306 ************************************ 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58918 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58918 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58918 ']' 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.306 10:05:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.306 [2024-10-17 10:05:29.382499] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:06:26.306 [2024-10-17 10:05:29.382610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58918 ] 00:06:26.565 [2024-10-17 10:05:29.527279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.565 [2024-10-17 10:05:29.611871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.138 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.138 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:27.138 10:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58918 00:06:27.138 10:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58918 00:06:27.138 10:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58918 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58918 ']' 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58918 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58918 00:06:27.398 killing process with pid 58918 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58918' 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58918 00:06:27.398 10:05:30 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58918 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58918 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58918 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58918 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58918 ']' 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.775 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.775 ERROR: process (pid: 58918) is no longer running 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58918) - No such process 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.775 ************************************ 00:06:28.775 END TEST default_locks 00:06:28.775 ************************************ 00:06:28.775 00:06:28.775 real 0m2.364s 00:06:28.775 user 0m2.389s 00:06:28.775 sys 0m0.417s 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.775 10:05:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.775 10:05:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.775 10:05:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.775 10:05:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.775 10:05:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.775 ************************************ 00:06:28.775 START TEST default_locks_via_rpc 00:06:28.775 ************************************ 00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58971 00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58971 00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58971 ']' 00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
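The default_locks run above checks lock ownership with lslocks -p <pid> piped through grep -q spdk_cpu_lock, then repeats the probe expecting failure once the target is gone. The same probe as a helper (assumes util-linux lslocks and the spdk_cpu_lock naming from the trace):

    # True when the given PID currently holds an SPDK CPU-core lock file.
    locks_held() {
      local pid=$1
      lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock
    }

    if locks_held "$spdk_tgt_pid"; then
      echo "pid $spdk_tgt_pid holds its core lock"
    else
      echo "no spdk_cpu_lock entries for pid $spdk_tgt_pid" >&2
    fi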
00:06:28.775 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.776 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.776 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.776 10:05:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.776 [2024-10-17 10:05:31.839564] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:28.776 [2024-10-17 10:05:31.839739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:06:29.037 [2024-10-17 10:05:31.996477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.037 [2024-10-17 10:05:32.083931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58971 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58971 00:06:29.605 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58971 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58971 ']' 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58971 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58971 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.873 killing process with pid 58971 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58971' 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58971 00:06:29.873 10:05:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58971 00:06:31.275 00:06:31.275 real 0m2.412s 00:06:31.275 user 0m2.401s 00:06:31.275 sys 0m0.486s 00:06:31.275 10:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.275 10:05:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.275 ************************************ 00:06:31.275 END TEST default_locks_via_rpc 00:06:31.275 ************************************ 00:06:31.275 10:05:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.275 10:05:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.275 10:05:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.275 10:05:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.275 ************************************ 00:06:31.275 START TEST non_locking_app_on_locked_coremask 00:06:31.275 ************************************ 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59031 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59031 /var/tmp/spdk.sock 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59031 ']' 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.275 10:05:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.275 [2024-10-17 10:05:34.243362] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
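Every test here tears its target down through the killprocess sequence visible in the trace: probe with kill -0, read the process name with ps, then kill and wait. A condensed sketch, minus the sudo special-casing the helper performs:

    # Kill an spdk_tgt and reap it; mirrors the kill -0 / ps / kill / wait steps above.
    killprocess() {
      local pid=$1 name
      kill -0 "$pid" 2>/dev/null || return 0      # already gone
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 in the runs above
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true             # reap if it is our child; ignore nonzero exit
    }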
00:06:31.275 [2024-10-17 10:05:34.243478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:06:31.537 [2024-10-17 10:05:34.385259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.537 [2024-10-17 10:05:34.471961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59039 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59039 /var/tmp/spdk2.sock 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59039 ']' 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.109 10:05:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.109 [2024-10-17 10:05:35.124559] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:32.109 [2024-10-17 10:05:35.124998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:06:32.370 [2024-10-17 10:05:35.267639] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
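non_locking_app_on_locked_coremask runs two targets on the same core: the first one holds the core-0 lock, and the second is started with --disable-cpumask-locks plus a second RPC socket so the two can coexist. The launch shape, reduced to essentials (binary path and flags as in the trace; waitforlisten is the polling helper used throughout this log):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$SPDK_BIN" -m 0x1 &                                  # takes the core-0 lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                               # same mask, but no lock claimed
    waitforlisten "$pid2" /var/tmp/spdk2.sock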
00:06:32.370 [2024-10-17 10:05:35.267687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.370 [2024-10-17 10:05:35.441643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59031 ']' 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.753 killing process with pid 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59031' 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59031 00:06:33.753 10:05:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59031 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59039 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59039 ']' 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59039 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59039 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.341 killing process with pid 59039 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59039' 00:06:36.341 10:05:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59039 00:06:36.341 10:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59039 00:06:37.725 00:06:37.725 real 0m6.258s 00:06:37.725 user 0m6.461s 00:06:37.725 sys 0m0.851s 00:06:37.725 10:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.725 10:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 ************************************ 00:06:37.725 END TEST non_locking_app_on_locked_coremask 00:06:37.725 ************************************ 00:06:37.725 10:05:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:37.725 10:05:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.725 10:05:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.725 10:05:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 ************************************ 00:06:37.725 START TEST locking_app_on_unlocked_coremask 00:06:37.725 ************************************ 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59141 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59141 /var/tmp/spdk.sock 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59141 ']' 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.725 10:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 [2024-10-17 10:05:40.555544] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:37.725 [2024-10-17 10:05:40.555690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59141 ] 00:06:37.725 [2024-10-17 10:05:40.707226] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.725 [2024-10-17 10:05:40.707277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.725 [2024-10-17 10:05:40.798627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59152 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59152 /var/tmp/spdk2.sock 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59152 ']' 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 10:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.557 [2024-10-17 10:05:41.431445] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
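waitforlisten itself, leaned on by every test above, polls until the target is listening on its UNIX domain socket or a retry budget of 100 runs out. A minimal rendering of that loop; the real helper does an RPC-level readiness check, while the bare socket test below is an assumption made to keep the sketch self-contained:

    # Poll until $pid is alive and $sock exists, up to 100 tries (count as in the log).
    waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        [[ -S $sock ]] && return 0               # socket is up
        sleep 0.1
      done
      return 1
    }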
00:06:38.557 [2024-10-17 10:05:41.431560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:06:38.557 [2024-10-17 10:05:41.579690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.818 [2024-10-17 10:05:41.754734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.762 10:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.762 10:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.762 10:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59152 00:06:39.762 10:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.762 10:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59152 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59141 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59141 ']' 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59141 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59141 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59141' 00:06:40.034 killing process with pid 59141 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59141 00:06:40.034 10:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59141 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59152 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59152 ']' 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59152 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59152 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.578 killing process with pid 59152 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59152' 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59152 00:06:42.578 10:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59152 00:06:43.965 00:06:43.965 real 0m6.299s 00:06:43.965 user 0m6.484s 00:06:43.965 sys 0m0.840s 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.965 ************************************ 00:06:43.965 END TEST locking_app_on_unlocked_coremask 00:06:43.965 ************************************ 00:06:43.965 10:05:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:43.965 10:05:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.965 10:05:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.965 10:05:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.965 ************************************ 00:06:43.965 START TEST locking_app_on_locked_coremask 00:06:43.965 ************************************ 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59248 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59248 /var/tmp/spdk.sock 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59248 ']' 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.965 10:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.965 [2024-10-17 10:05:46.897074] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:06:43.965 [2024-10-17 10:05:46.897190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:06:43.965 [2024-10-17 10:05:47.048683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.225 [2024-10-17 10:05:47.167529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59264 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59264 /var/tmp/spdk2.sock 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59264 /var/tmp/spdk2.sock 00:06:44.797 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59264 /var/tmp/spdk2.sock 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59264 ']' 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.798 10:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.798 [2024-10-17 10:05:47.832759] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
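locking_app_on_locked_coremask expects the second waitforlisten to fail, so the call is wrapped in the NOT helper: run the command, capture its exit status, and succeed only when that status is nonzero. The trace also shows checks on statuses above 128 and an allowed-error pattern; the sketch below keeps only the final inversion:

    # Succeed iff the wrapped command fails.
    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # invert: a nonzero exit becomes success
    }

    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # must fail: core 0 is already claimed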
00:06:44.798 [2024-10-17 10:05:47.832898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ] 00:06:45.157 [2024-10-17 10:05:47.995353] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59248 has claimed it. 00:06:45.157 [2024-10-17 10:05:47.995417] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.417 ERROR: process (pid: 59264) is no longer running 00:06:45.417 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59264) - No such process 00:06:45.417 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.417 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:45.417 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:45.417 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.418 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.418 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.418 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59248 00:06:45.418 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59248 00:06:45.418 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59248 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59248 ']' 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59248 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59248 00:06:45.679 killing process with pid 59248 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59248' 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59248 00:06:45.679 10:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59248 00:06:47.065 00:06:47.065 real 0m3.319s 00:06:47.065 user 0m3.538s 00:06:47.065 sys 0m0.544s 00:06:47.065 10:05:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.065 10:05:50 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:47.065 ************************************ 00:06:47.065 END TEST locking_app_on_locked_coremask 00:06:47.065 ************************************ 00:06:47.420 10:05:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.420 10:05:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.420 10:05:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.420 10:05:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.420 ************************************ 00:06:47.420 START TEST locking_overlapped_coremask 00:06:47.420 ************************************ 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59317 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59317 /var/tmp/spdk.sock 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59317 ']' 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.420 10:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.420 [2024-10-17 10:05:50.278514] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:06:47.420 [2024-10-17 10:05:50.278665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59317 ] 00:06:47.420 [2024-10-17 10:05:50.430476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.681 [2024-10-17 10:05:50.533998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.681 [2024-10-17 10:05:50.534087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.681 [2024-10-17 10:05:50.534111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59335 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59335 /var/tmp/spdk2.sock 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59335 /var/tmp/spdk2.sock 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59335 /var/tmp/spdk2.sock 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59335 ']' 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.294 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.294 [2024-10-17 10:05:51.196540] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
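locking_overlapped_coremask pits -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4); the masks intersect exactly on core 2, which is why the second target aborts with 'Cannot create lock on core 2'. The overlap is plain bit arithmetic:

    mask1=0x7    # cores 0,1,2  (first spdk_tgt)
    mask2=0x1c   # cores 2,3,4  (second spdk_tgt)
    overlap=$(( mask1 & mask2 ))
    printf 'shared mask: 0x%x\n' "$overlap"   # prints 0x4 -> core 2 is contested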
00:06:48.294 [2024-10-17 10:05:51.196667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59335 ] 00:06:48.294 [2024-10-17 10:05:51.352760] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59317 has claimed it. 00:06:48.294 [2024-10-17 10:05:51.352946] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59335) - No such process 00:06:48.866 ERROR: process (pid: 59335) is no longer running 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59317 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59317 ']' 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59317 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59317 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.866 killing process with pid 59317 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59317' 00:06:48.866 10:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59317 00:06:48.866 10:05:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59317 00:06:50.253 00:06:50.253 real 0m3.003s 00:06:50.253 user 0m8.206s 00:06:50.253 sys 0m0.420s 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.253 ************************************ 00:06:50.253 END TEST locking_overlapped_coremask 00:06:50.253 ************************************ 00:06:50.253 10:05:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.253 10:05:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.253 10:05:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.253 10:05:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.253 ************************************ 00:06:50.253 START TEST locking_overlapped_coremask_via_rpc 00:06:50.253 ************************************ 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59388 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59388 /var/tmp/spdk.sock 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59388 ']' 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.253 10:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.253 [2024-10-17 10:05:53.312651] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:50.253 [2024-10-17 10:05:53.312779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59388 ] 00:06:50.515 [2024-10-17 10:05:53.462524] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
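After the overlap test, check_remaining_locks asserts that exactly the lock files for cores 000 through 002 survive by comparing a glob of /var/tmp/spdk_cpu_lock_* against a brace-expanded expected list, as the trace shows. The same assertion, standalone:

    # Verify the surviving lock files are exactly spdk_cpu_lock_000..002.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || {
      echo "unexpected lock files: ${locks[*]}" >&2
      exit 1
    }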
00:06:50.515 [2024-10-17 10:05:53.462576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.515 [2024-10-17 10:05:53.551025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.515 [2024-10-17 10:05:53.551235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.515 [2024-10-17 10:05:53.551430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59406 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59406 /var/tmp/spdk2.sock 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59406 ']' 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.087 10:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.348 [2024-10-17 10:05:54.210737] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:51.348 [2024-10-17 10:05:54.210857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59406 ] 00:06:51.348 [2024-10-17 10:05:54.366257] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.348 [2024-10-17 10:05:54.366428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.609 [2024-10-17 10:05:54.579251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.609 [2024-10-17 10:05:54.579271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.609 [2024-10-17 10:05:54.579286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 [2024-10-17 10:05:55.735186] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59388 has claimed it. 
00:06:52.995 request: 00:06:52.995 { 00:06:52.995 "method": "framework_enable_cpumask_locks", 00:06:52.995 "req_id": 1 00:06:52.995 } 00:06:52.995 Got JSON-RPC error response 00:06:52.995 response: 00:06:52.995 { 00:06:52.995 "code": -32603, 00:06:52.995 "message": "Failed to claim CPU core: 2" 00:06:52.995 } 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59388 /var/tmp/spdk.sock 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59388 ']' 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59406 /var/tmp/spdk2.sock 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59406 ']' 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
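The exchange above is the point of this test: both targets appear to have been launched with --disable-cpumask-locks (the second target's flag is visible above; the first's is inferred from the RPC flow), the first target (pid 59388, reactors on cores 0-2) then claimed its cores via the framework_enable_cpumask_locks RPC, and the second target (-m 0x1c, reactors on cores 2-4) is expected to fail the same RPC with -32603 because core 2 is already locked. A minimal hand-run sketch of the same scenario — the first target's mask 0x07 is an assumption inferred from its reactor placement, not shown in this log:

  # sketch, not the traced test script; masks inferred from the reactor NOTICEs above
  build/bin/spdk_tgt -m 0x07 -r /var/tmp/spdk.sock  --disable-cpumask-locks &   # cores 0-2, lock claiming deferred
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, overlaps on core 2
  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # ok: creates /var/tmp/spdk_cpu_lock_000..002
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails with -32603: core 2 already claimed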
00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.995 10:05:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.256 00:06:53.256 real 0m2.946s 00:06:53.256 user 0m1.164s 00:06:53.256 sys 0m0.130s 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.256 10:05:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.256 ************************************ 00:06:53.256 END TEST locking_overlapped_coremask_via_rpc 00:06:53.256 ************************************ 00:06:53.256 10:05:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.256 10:05:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59388 ]] 00:06:53.256 10:05:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59388 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59388 ']' 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59388 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59388 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59388' 00:06:53.256 killing process with pid 59388 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59388 00:06:53.256 10:05:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59388 00:06:54.643 10:05:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59406 ]] 00:06:54.643 10:05:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59406 00:06:54.643 10:05:57 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59406 ']' 00:06:54.643 10:05:57 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59406 00:06:54.643 10:05:57 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:54.643 10:05:57 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.643 
10:05:57 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59406 00:06:54.644 10:05:57 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:54.644 10:05:57 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:54.644 killing process with pid 59406 00:06:54.644 10:05:57 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59406' 00:06:54.644 10:05:57 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59406 00:06:54.644 10:05:57 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59406 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59388 ]] 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59388 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59388 ']' 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59388 00:06:56.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59388) - No such process 00:06:56.027 Process with pid 59388 is not found 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59388 is not found' 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59406 ]] 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59406 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59406 ']' 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59406 00:06:56.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59406) - No such process 00:06:56.027 Process with pid 59406 is not found 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59406 is not found' 00:06:56.027 10:05:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.027 00:06:56.027 real 0m29.543s 00:06:56.027 user 0m51.200s 00:06:56.027 sys 0m4.467s 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.027 10:05:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.027 ************************************ 00:06:56.027 END TEST cpu_locks 00:06:56.027 ************************************ 00:06:56.027 ************************************ 00:06:56.027 END TEST event 00:06:56.027 ************************************ 00:06:56.027 00:06:56.027 real 0m56.022s 00:06:56.027 user 1m43.399s 00:06:56.027 sys 0m7.268s 00:06:56.027 10:05:58 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.027 10:05:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.027 10:05:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:56.027 10:05:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.027 10:05:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.027 10:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:56.027 ************************************ 00:06:56.027 START TEST thread 00:06:56.027 ************************************ 00:06:56.027 10:05:58 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:56.027 * Looking for test storage... 
00:06:56.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:56.027 10:05:58 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:56.028 10:05:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.028 10:05:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.028 10:05:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.028 10:05:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.028 10:05:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.028 10:05:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.028 10:05:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.028 10:05:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.028 10:05:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.028 10:05:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.028 10:05:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.028 10:05:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:56.028 10:05:58 thread -- scripts/common.sh@345 -- # : 1 00:06:56.028 10:05:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.028 10:05:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.028 10:05:58 thread -- scripts/common.sh@365 -- # decimal 1 00:06:56.028 10:05:58 thread -- scripts/common.sh@353 -- # local d=1 00:06:56.028 10:05:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.028 10:05:58 thread -- scripts/common.sh@355 -- # echo 1 00:06:56.028 10:05:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.028 10:05:58 thread -- scripts/common.sh@366 -- # decimal 2 00:06:56.028 10:05:58 thread -- scripts/common.sh@353 -- # local d=2 00:06:56.028 10:05:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.028 10:05:58 thread -- scripts/common.sh@355 -- # echo 2 00:06:56.028 10:05:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.028 10:05:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.028 10:05:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.028 10:05:58 thread -- scripts/common.sh@368 -- # return 0 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:56.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.028 --rc genhtml_branch_coverage=1 00:06:56.028 --rc genhtml_function_coverage=1 00:06:56.028 --rc genhtml_legend=1 00:06:56.028 --rc geninfo_all_blocks=1 00:06:56.028 --rc geninfo_unexecuted_blocks=1 00:06:56.028 00:06:56.028 ' 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:56.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.028 --rc genhtml_branch_coverage=1 00:06:56.028 --rc genhtml_function_coverage=1 00:06:56.028 --rc genhtml_legend=1 00:06:56.028 --rc geninfo_all_blocks=1 00:06:56.028 --rc geninfo_unexecuted_blocks=1 00:06:56.028 00:06:56.028 ' 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:56.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:56.028 --rc genhtml_branch_coverage=1 00:06:56.028 --rc genhtml_function_coverage=1 00:06:56.028 --rc genhtml_legend=1 00:06:56.028 --rc geninfo_all_blocks=1 00:06:56.028 --rc geninfo_unexecuted_blocks=1 00:06:56.028 00:06:56.028 ' 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:56.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.028 --rc genhtml_branch_coverage=1 00:06:56.028 --rc genhtml_function_coverage=1 00:06:56.028 --rc genhtml_legend=1 00:06:56.028 --rc geninfo_all_blocks=1 00:06:56.028 --rc geninfo_unexecuted_blocks=1 00:06:56.028 00:06:56.028 ' 00:06:56.028 10:05:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.028 10:05:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.028 ************************************ 00:06:56.028 START TEST thread_poller_perf 00:06:56.028 ************************************ 00:06:56.028 10:05:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:56.028 [2024-10-17 10:05:58.937765] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:56.028 [2024-10-17 10:05:58.937878] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59561 ] 00:06:56.028 [2024-10-17 10:05:59.087313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.287 [2024-10-17 10:05:59.189781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.287 Running 1000 pollers for 1 seconds with 1 microseconds period. 
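The summary block that follows reports poller_cost two ways; consistent with both poller_perf runs in this log, the figures derive as poller_cost_cyc = busy / total_run_count and poller_cost_nsec = poller_cost_cyc * 1e9 / tsc_hz. A sketch of the arithmetic with this run's numbers (shell arithmetic, not output of the tool itself):

  echo $(( 2612518906 / 305000 ))              # 8565 cyc per poller invocation
  echo $(( 8565 * 1000000000 / 2600000000 ))   # 3294 nsec at tsc_hz = 2.6 GHz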
00:06:57.678 [2024-10-17T10:06:00.769Z] ====================================== 00:06:57.678 [2024-10-17T10:06:00.769Z] busy:2612518906 (cyc) 00:06:57.678 [2024-10-17T10:06:00.769Z] total_run_count: 305000 00:06:57.678 [2024-10-17T10:06:00.769Z] tsc_hz: 2600000000 (cyc) 00:06:57.678 [2024-10-17T10:06:00.769Z] ====================================== 00:06:57.678 [2024-10-17T10:06:00.769Z] poller_cost: 8565 (cyc), 3294 (nsec) 00:06:57.678 00:06:57.678 real 0m1.444s 00:06:57.678 user 0m1.273s 00:06:57.678 sys 0m0.064s 00:06:57.678 10:06:00 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.678 10:06:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.678 ************************************ 00:06:57.678 END TEST thread_poller_perf 00:06:57.678 ************************************ 00:06:57.678 10:06:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.678 10:06:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:57.678 10:06:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.678 10:06:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.678 ************************************ 00:06:57.678 START TEST thread_poller_perf 00:06:57.678 ************************************ 00:06:57.678 10:06:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.678 [2024-10-17 10:06:00.423587] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:06:57.678 [2024-10-17 10:06:00.423701] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59597 ] 00:06:57.678 [2024-10-17 10:06:00.573185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.678 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:57.678 [2024-10-17 10:06:00.747550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.060 [2024-10-17T10:06:02.151Z] ====================================== 00:06:59.060 [2024-10-17T10:06:02.151Z] busy:2605559000 (cyc) 00:06:59.060 [2024-10-17T10:06:02.151Z] total_run_count: 3781000 00:06:59.060 [2024-10-17T10:06:02.151Z] tsc_hz: 2600000000 (cyc) 00:06:59.060 [2024-10-17T10:06:02.151Z] ====================================== 00:06:59.060 [2024-10-17T10:06:02.151Z] poller_cost: 689 (cyc), 265 (nsec) 00:06:59.060 00:06:59.060 real 0m1.513s 00:06:59.060 user 0m1.327s 00:06:59.060 sys 0m0.078s 00:06:59.060 10:06:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.060 ************************************ 00:06:59.060 END TEST thread_poller_perf 00:06:59.060 ************************************ 00:06:59.060 10:06:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.060 10:06:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:59.060 00:06:59.060 real 0m3.175s 00:06:59.060 user 0m2.708s 00:06:59.060 sys 0m0.257s 00:06:59.060 10:06:01 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.060 10:06:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.060 ************************************ 00:06:59.060 END TEST thread 00:06:59.060 ************************************ 00:06:59.060 10:06:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:59.060 10:06:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:59.060 10:06:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.060 10:06:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.060 10:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 ************************************ 00:06:59.061 START TEST app_cmdline 00:06:59.061 ************************************ 00:06:59.061 10:06:01 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:59.061 * Looking for test storage... 
00:06:59.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.061 10:06:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.061 --rc genhtml_branch_coverage=1 00:06:59.061 --rc genhtml_function_coverage=1 00:06:59.061 --rc genhtml_legend=1 00:06:59.061 --rc geninfo_all_blocks=1 00:06:59.061 --rc geninfo_unexecuted_blocks=1 00:06:59.061 00:06:59.061 ' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.061 --rc genhtml_branch_coverage=1 00:06:59.061 --rc genhtml_function_coverage=1 00:06:59.061 --rc genhtml_legend=1 00:06:59.061 --rc geninfo_all_blocks=1 00:06:59.061 --rc geninfo_unexecuted_blocks=1 00:06:59.061 
00:06:59.061 ' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.061 --rc genhtml_branch_coverage=1 00:06:59.061 --rc genhtml_function_coverage=1 00:06:59.061 --rc genhtml_legend=1 00:06:59.061 --rc geninfo_all_blocks=1 00:06:59.061 --rc geninfo_unexecuted_blocks=1 00:06:59.061 00:06:59.061 ' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.061 --rc genhtml_branch_coverage=1 00:06:59.061 --rc genhtml_function_coverage=1 00:06:59.061 --rc genhtml_legend=1 00:06:59.061 --rc geninfo_all_blocks=1 00:06:59.061 --rc geninfo_unexecuted_blocks=1 00:06:59.061 00:06:59.061 ' 00:06:59.061 10:06:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.061 10:06:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59686 00:06:59.061 10:06:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59686 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59686 ']' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.061 10:06:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.061 10:06:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 [2024-10-17 10:06:02.208693] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:06:59.321 [2024-10-17 10:06:02.208826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59686 ] 00:06:59.321 [2024-10-17 10:06:02.355005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.579 [2024-10-17 10:06:02.485188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.147 10:06:03 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.147 10:06:03 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:00.147 10:06:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:00.423 { 00:07:00.423 "version": "SPDK v25.01-pre git sha1 2a2bf59c2", 00:07:00.423 "fields": { 00:07:00.423 "major": 25, 00:07:00.423 "minor": 1, 00:07:00.423 "patch": 0, 00:07:00.423 "suffix": "-pre", 00:07:00.423 "commit": "2a2bf59c2" 00:07:00.423 } 00:07:00.423 } 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.423 10:06:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:00.423 10:06:03 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.719 request: 00:07:00.719 { 00:07:00.719 "method": "env_dpdk_get_mem_stats", 00:07:00.719 "req_id": 1 00:07:00.719 } 00:07:00.719 Got JSON-RPC error response 00:07:00.719 response: 00:07:00.719 { 00:07:00.719 "code": -32601, 00:07:00.719 "message": "Method not found" 00:07:00.719 } 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.719 10:06:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59686 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59686 ']' 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59686 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59686 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.719 killing process with pid 59686 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59686' 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@969 -- # kill 59686 00:07:00.719 10:06:03 app_cmdline -- common/autotest_common.sh@974 -- # wait 59686 00:07:02.101 00:07:02.101 real 0m3.175s 00:07:02.101 user 0m3.562s 00:07:02.101 sys 0m0.442s 00:07:02.101 10:06:05 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.101 10:06:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.101 ************************************ 00:07:02.101 END TEST app_cmdline 00:07:02.101 ************************************ 00:07:02.362 10:06:05 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.362 10:06:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.362 10:06:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.362 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.362 ************************************ 00:07:02.362 START TEST version 00:07:02.362 ************************************ 00:07:02.362 10:06:05 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.362 * Looking for test storage... 
00:07:02.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.363 10:06:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.363 10:06:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.363 10:06:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.363 10:06:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.363 10:06:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.363 10:06:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.363 10:06:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.363 10:06:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.363 10:06:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.363 10:06:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.363 10:06:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.363 10:06:05 version -- scripts/common.sh@344 -- # case "$op" in 00:07:02.363 10:06:05 version -- scripts/common.sh@345 -- # : 1 00:07:02.363 10:06:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.363 10:06:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.363 10:06:05 version -- scripts/common.sh@365 -- # decimal 1 00:07:02.363 10:06:05 version -- scripts/common.sh@353 -- # local d=1 00:07:02.363 10:06:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.363 10:06:05 version -- scripts/common.sh@355 -- # echo 1 00:07:02.363 10:06:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.363 10:06:05 version -- scripts/common.sh@366 -- # decimal 2 00:07:02.363 10:06:05 version -- scripts/common.sh@353 -- # local d=2 00:07:02.363 10:06:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.363 10:06:05 version -- scripts/common.sh@355 -- # echo 2 00:07:02.363 10:06:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.363 10:06:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.363 10:06:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.363 10:06:05 version -- scripts/common.sh@368 -- # return 0 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.363 --rc genhtml_branch_coverage=1 00:07:02.363 --rc genhtml_function_coverage=1 00:07:02.363 --rc genhtml_legend=1 00:07:02.363 --rc geninfo_all_blocks=1 00:07:02.363 --rc geninfo_unexecuted_blocks=1 00:07:02.363 00:07:02.363 ' 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.363 --rc genhtml_branch_coverage=1 00:07:02.363 --rc genhtml_function_coverage=1 00:07:02.363 --rc genhtml_legend=1 00:07:02.363 --rc geninfo_all_blocks=1 00:07:02.363 --rc geninfo_unexecuted_blocks=1 00:07:02.363 00:07:02.363 ' 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.363 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:02.363 --rc genhtml_branch_coverage=1 00:07:02.363 --rc genhtml_function_coverage=1 00:07:02.363 --rc genhtml_legend=1 00:07:02.363 --rc geninfo_all_blocks=1 00:07:02.363 --rc geninfo_unexecuted_blocks=1 00:07:02.363 00:07:02.363 ' 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.363 --rc genhtml_branch_coverage=1 00:07:02.363 --rc genhtml_function_coverage=1 00:07:02.363 --rc genhtml_legend=1 00:07:02.363 --rc geninfo_all_blocks=1 00:07:02.363 --rc geninfo_unexecuted_blocks=1 00:07:02.363 00:07:02.363 ' 00:07:02.363 10:06:05 version -- app/version.sh@17 -- # get_header_version major 00:07:02.363 10:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # cut -f2 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.363 10:06:05 version -- app/version.sh@17 -- # major=25 00:07:02.363 10:06:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # cut -f2 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.363 10:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.363 10:06:05 version -- app/version.sh@18 -- # minor=1 00:07:02.363 10:06:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.363 10:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # cut -f2 00:07:02.363 10:06:05 version -- app/version.sh@19 -- # patch=0 00:07:02.363 10:06:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.363 10:06:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.363 10:06:05 version -- app/version.sh@14 -- # cut -f2 00:07:02.363 10:06:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.363 10:06:05 version -- app/version.sh@22 -- # version=25.1 00:07:02.363 10:06:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.363 10:06:05 version -- app/version.sh@28 -- # version=25.1rc0 00:07:02.363 10:06:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:02.363 10:06:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:02.363 10:06:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:02.363 10:06:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:02.363 00:07:02.363 real 0m0.206s 00:07:02.363 user 0m0.138s 00:07:02.363 sys 0m0.097s 00:07:02.363 10:06:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.363 10:06:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.363 ************************************ 00:07:02.363 END TEST version 00:07:02.363 ************************************ 00:07:02.624 10:06:05 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:02.624 10:06:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:02.624 10:06:05 -- spdk/autotest.sh@194 -- # uname -s 00:07:02.624 10:06:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:02.624 10:06:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:02.624 10:06:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:02.624 10:06:05 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:02.624 10:06:05 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:02.624 10:06:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.624 10:06:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.624 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.624 ************************************ 00:07:02.624 START TEST blockdev_nvme 00:07:02.624 ************************************ 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:02.624 * Looking for test storage... 00:07:02.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.624 10:06:05 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.624 --rc genhtml_branch_coverage=1 00:07:02.624 --rc genhtml_function_coverage=1 00:07:02.624 --rc genhtml_legend=1 00:07:02.624 --rc geninfo_all_blocks=1 00:07:02.624 --rc geninfo_unexecuted_blocks=1 00:07:02.624 00:07:02.624 ' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.624 --rc genhtml_branch_coverage=1 00:07:02.624 --rc genhtml_function_coverage=1 00:07:02.624 --rc genhtml_legend=1 00:07:02.624 --rc geninfo_all_blocks=1 00:07:02.624 --rc geninfo_unexecuted_blocks=1 00:07:02.624 00:07:02.624 ' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.624 --rc genhtml_branch_coverage=1 00:07:02.624 --rc genhtml_function_coverage=1 00:07:02.624 --rc genhtml_legend=1 00:07:02.624 --rc geninfo_all_blocks=1 00:07:02.624 --rc geninfo_unexecuted_blocks=1 00:07:02.624 00:07:02.624 ' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.624 --rc genhtml_branch_coverage=1 00:07:02.624 --rc genhtml_function_coverage=1 00:07:02.624 --rc genhtml_legend=1 00:07:02.624 --rc geninfo_all_blocks=1 00:07:02.624 --rc geninfo_unexecuted_blocks=1 00:07:02.624 00:07:02.624 ' 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:02.624 10:06:05 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59858 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59858 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 59858 ']' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.624 10:06:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.624 10:06:05 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:02.624 [2024-10-17 10:06:05.692041] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:07:02.624 [2024-10-17 10:06:05.692329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:07:02.885 [2024-10-17 10:06:05.841493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.885 [2024-10-17 10:06:05.943624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.459 10:06:06 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.459 10:06:06 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:03.459 10:06:06 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:03.459 10:06:06 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:03.459 10:06:06 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:03.459 10:06:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:03.459 10:06:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:03.719 10:06:06 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:03.719 10:06:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.719 10:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 10:06:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.981 10:06:06 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:03.981 10:06:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.982 10:06:06 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 10:06:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:03.982 10:06:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "48d36b3c-a0f8-4b08-bb56-cce232c55024"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "48d36b3c-a0f8-4b08-bb56-cce232c55024",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7f0aab42-9d72-43a9-93ef-a93ede98ce3f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7f0aab42-9d72-43a9-93ef-a93ede98ce3f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "eef83674-f2c7-4e3c-88f0-8480d174e7bd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eef83674-f2c7-4e3c-88f0-8480d174e7bd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "383c0d45-c1a4-41d6-8ae0-bd7f6cd752fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "383c0d45-c1a4-41d6-8ae0-bd7f6cd752fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "da4c9147-35f6-4fcf-a739-371d5c09e82f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "da4c9147-35f6-4fcf-a739-371d5c09e82f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8f561c74-7b47-4cdc-9e2d-ae07642e40f9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8f561c74-7b47-4cdc-9e2d-ae07642e40f9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:03.983 10:06:07 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:03.983 10:06:07 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:03.983 10:06:07 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:03.983 10:06:07 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59858 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 59858 ']' 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 59858 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:03.983 10:06:07 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59858 00:07:03.983 killing process with pid 59858 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59858' 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 59858 00:07:03.983 10:06:07 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 59858 00:07:05.923 10:06:08 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.923 10:06:08 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.923 10:06:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:05.923 10:06:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.923 10:06:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.923 ************************************ 00:07:05.923 START TEST bdev_hello_world 00:07:05.923 ************************************ 00:07:05.923 10:06:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.923 [2024-10-17 10:06:08.586318] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:05.923 [2024-10-17 10:06:08.586449] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59942 ] 00:07:05.923 [2024-10-17 10:06:08.737530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.923 [2024-10-17 10:06:08.836999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.489 [2024-10-17 10:06:09.369122] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:06.489 [2024-10-17 10:06:09.369165] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:06.489 [2024-10-17 10:06:09.369186] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:06.489 [2024-10-17 10:06:09.371621] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:06.489 [2024-10-17 10:06:09.372082] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:06.489 [2024-10-17 10:06:09.372132] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:06.489 [2024-10-17 10:06:09.372253] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
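For reference, hello_bdev consumes an SPDK subsystem config through --json and then opens the bdev named by -b. A minimal sketch of such a config, assuming a single PCIe controller attach (the bdev.json used above is the harness's own and evidently attaches all four QEMU controllers seen in the bdev list earlier; the /tmp path here is illustrative):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# attaching controller "Nvme0" exposes its namespace 1 as bdev "Nvme0n1"
build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1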
00:07:06.489 00:07:06.489 [2024-10-17 10:06:09.372270] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:07.057 ************************************ 00:07:07.057 END TEST bdev_hello_world 00:07:07.057 ************************************ 00:07:07.057 00:07:07.057 real 0m1.554s 00:07:07.057 user 0m1.276s 00:07:07.057 sys 0m0.172s 00:07:07.057 10:06:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.057 10:06:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:07.057 10:06:10 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:07.057 10:06:10 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.057 10:06:10 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.057 10:06:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.057 ************************************ 00:07:07.057 START TEST bdev_bounds 00:07:07.057 ************************************ 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59979 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.057 Process bdevio pid: 59979 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59979' 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59979 00:07:07.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 59979 ']' 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.057 10:06:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.317 [2024-10-17 10:06:10.177379] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
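The bounds test launches bdevio with -w so the app initializes and then waits to be told to run, rather than firing its suites at startup. A condensed sketch of that launch-and-drive pattern, with paths as in the log, error handling omitted, and waitforlisten being the harness helper that polls the app's RPC socket:

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
waitforlisten "$bdevio_pid"               # poll /var/tmp/spdk.sock until the app answers
test/bdev/bdevio/tests.py perform_tests   # triggers the CUnit suites recorded below
kill "$bdevio_pid" && wait "$bdevio_pid"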
00:07:07.317 [2024-10-17 10:06:10.177583] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59979 ] 00:07:07.317 [2024-10-17 10:06:10.317636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.577 [2024-10-17 10:06:10.421223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.577 [2024-10-17 10:06:10.421295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.577 [2024-10-17 10:06:10.421303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.202 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.202 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:08.202 10:06:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:08.202 I/O targets: 00:07:08.202 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:08.202 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:08.202 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.202 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.202 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.202 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:08.202 00:07:08.202 00:07:08.202 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.202 http://cunit.sourceforge.net/ 00:07:08.202 00:07:08.202 00:07:08.202 Suite: bdevio tests on: Nvme3n1 00:07:08.202 Test: blockdev write read block ...passed 00:07:08.202 Test: blockdev write zeroes read block ...passed 00:07:08.202 Test: blockdev write zeroes read no split ...passed 00:07:08.202 Test: blockdev write zeroes read split ...passed 00:07:08.202 Test: blockdev write zeroes read split partial ...passed 00:07:08.202 Test: blockdev reset ...[2024-10-17 10:06:11.177574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:08.202 passed 00:07:08.202 Test: blockdev write read 8 blocks ...[2024-10-17 10:06:11.180747] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
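The bdev names these suites walk through come from the mapfile/jq pipeline recorded at the top of this excerpt. Reproduced standalone (rpc_cmd in the harness wraps scripts/rpc.py; default RPC socket assumed):

mapfile -t bdevs < <(scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')  # keep only unclaimed bdevs
mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)                           # re-parse the stream, pull names
printf '%s\n' "${bdevs_name[@]}"   # Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1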
00:07:08.202 passed 00:07:08.202 Test: blockdev write read size > 128k ...passed 00:07:08.202 Test: blockdev write read invalid size ...passed 00:07:08.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.202 Test: blockdev write read max offset ...passed 00:07:08.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.202 Test: blockdev writev readv 8 blocks ...passed 00:07:08.202 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.202 Test: blockdev writev readv block ...passed 00:07:08.202 Test: blockdev writev readv size > 128k ...passed 00:07:08.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.202 Test: blockdev comparev and writev ...[2024-10-17 10:06:11.186827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1c0a000 len:0x1000 00:07:08.202 passed 00:07:08.202 Test: blockdev nvme passthru rw ...passed 00:07:08.202 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:06:11.187016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.202 [2024-10-17 10:06:11.187578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.202 [2024-10-17 10:06:11.187653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.202 passed 00:07:08.202 Test: blockdev nvme admin passthru ...passed 00:07:08.202 Test: blockdev copy ...passed 00:07:08.202 Suite: bdevio tests on: Nvme2n3 00:07:08.202 Test: blockdev write read block ...passed 00:07:08.202 Test: blockdev write zeroes read block ...passed 00:07:08.202 Test: blockdev write zeroes read no split ...passed 00:07:08.202 Test: blockdev write zeroes read split ...passed 00:07:08.202 Test: blockdev write zeroes read split partial ...passed 00:07:08.202 Test: blockdev reset ...[2024-10-17 10:06:11.254431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:08.202 [2024-10-17 10:06:11.257473] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
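A note on the "(02/85) COMPARE FAILURE" completions printed in these comparev tests: the suites pass while emitting them, so the mismatching compare is evidently the outcome being asserted, not a fault. The "(02/85)" pair is the NVMe status code type/status code (media errors / compare failure). Outside SPDK, the same class of check can be sketched with nvme-cli against a kernel-owned device (device node and mismatch buffer are assumptions; a non-matching buffer should complete with this same 0x2/0x85 status):

nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=./mismatch.bin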
00:07:08.202 passed 00:07:08.202 Test: blockdev write read 8 blocks ...passed 00:07:08.202 Test: blockdev write read size > 128k ...passed 00:07:08.202 Test: blockdev write read invalid size ...passed 00:07:08.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.202 Test: blockdev write read max offset ...passed 00:07:08.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.202 Test: blockdev writev readv 8 blocks ...passed 00:07:08.202 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.202 Test: blockdev writev readv block ...passed 00:07:08.202 Test: blockdev writev readv size > 128k ...passed 00:07:08.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.202 Test: blockdev comparev and writev ...[2024-10-17 10:06:11.264562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295606000 len:0x1000 00:07:08.202 [2024-10-17 10:06:11.264758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.202 passed 00:07:08.202 Test: blockdev nvme passthru rw ...passed 00:07:08.202 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:06:11.265655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.202 passed[2024-10-17 10:06:11.265769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.202 00:07:08.202 Test: blockdev nvme admin passthru ...passed 00:07:08.202 Test: blockdev copy ...passed 00:07:08.202 Suite: bdevio tests on: Nvme2n2 00:07:08.202 Test: blockdev write read block ...passed 00:07:08.202 Test: blockdev write zeroes read block ...passed 00:07:08.202 Test: blockdev write zeroes read no split ...passed 00:07:08.464 Test: blockdev write zeroes read split ...passed 00:07:08.464 Test: blockdev write zeroes read split partial ...passed 00:07:08.464 Test: blockdev reset ...[2024-10-17 10:06:11.330765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:08.464 [2024-10-17 10:06:11.333751] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.464 passed 00:07:08.464 Test: blockdev write read 8 blocks ...passed 00:07:08.464 Test: blockdev write read size > 128k ...passed 00:07:08.464 Test: blockdev write read invalid size ...passed 00:07:08.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.464 Test: blockdev write read max offset ...passed 00:07:08.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.464 Test: blockdev writev readv 8 blocks ...passed 00:07:08.464 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.464 Test: blockdev writev readv block ...passed 00:07:08.464 Test: blockdev writev readv size > 128k ...passed 00:07:08.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.464 Test: blockdev comparev and writev ...[2024-10-17 10:06:11.340557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d323c000 len:0x1000 00:07:08.464 [2024-10-17 10:06:11.340734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.464 passed 00:07:08.464 Test: blockdev nvme passthru rw ...passed 00:07:08.464 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:06:11.341383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.464 passed 00:07:08.464 Test: blockdev nvme admin passthru ...[2024-10-17 10:06:11.341513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.464 passed 00:07:08.464 Test: blockdev copy ...passed 00:07:08.464 Suite: bdevio tests on: Nvme2n1 00:07:08.464 Test: blockdev write read block ...passed 00:07:08.464 Test: blockdev write zeroes read block ...passed 00:07:08.464 Test: blockdev write zeroes read no split ...passed 00:07:08.464 Test: blockdev write zeroes read split ...passed 00:07:08.464 Test: blockdev write zeroes read split partial ...passed 00:07:08.464 Test: blockdev reset ...[2024-10-17 10:06:11.400981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:08.464 [2024-10-17 10:06:11.404033] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.464 passed 00:07:08.464 Test: blockdev write read 8 blocks ...passed 00:07:08.464 Test: blockdev write read size > 128k ...passed 00:07:08.464 Test: blockdev write read invalid size ...passed 00:07:08.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.464 Test: blockdev write read max offset ...passed 00:07:08.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.464 Test: blockdev writev readv 8 blocks ...passed 00:07:08.464 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.464 Test: blockdev writev readv block ...passed 00:07:08.464 Test: blockdev writev readv size > 128k ...passed 00:07:08.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.464 Test: blockdev comparev and writev ...[2024-10-17 10:06:11.408786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3238000 len:0x1000 00:07:08.464 passed 00:07:08.464 Test: blockdev nvme passthru rw ...passed 00:07:08.464 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:06:11.408967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.464 [2024-10-17 10:06:11.409433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.464 [2024-10-17 10:06:11.409564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.464 passed 00:07:08.464 Test: blockdev nvme admin passthru ...passed 00:07:08.464 Test: blockdev copy ...passed 00:07:08.464 Suite: bdevio tests on: Nvme1n1 00:07:08.464 Test: blockdev write read block ...passed 00:07:08.464 Test: blockdev write zeroes read block ...passed 00:07:08.464 Test: blockdev write zeroes read no split ...passed 00:07:08.464 Test: blockdev write zeroes read split ...passed 00:07:08.464 Test: blockdev write zeroes read split partial ...passed 00:07:08.464 Test: blockdev reset ...[2024-10-17 10:06:11.454628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:08.464 [2024-10-17 10:06:11.457448] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.464 passed 00:07:08.464 Test: blockdev write read 8 blocks ...passed 00:07:08.464 Test: blockdev write read size > 128k ...passed 00:07:08.464 Test: blockdev write read invalid size ...passed 00:07:08.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.464 Test: blockdev write read max offset ...passed 00:07:08.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.464 Test: blockdev writev readv 8 blocks ...passed 00:07:08.464 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.464 Test: blockdev writev readv block ...passed 00:07:08.464 Test: blockdev writev readv size > 128k ...passed 00:07:08.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.464 Test: blockdev comparev and writev ...[2024-10-17 10:06:11.464543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3234000 len:0x1000 00:07:08.465 [2024-10-17 10:06:11.464726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.465 passed 00:07:08.465 Test: blockdev nvme passthru rw ...passed 00:07:08.465 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:06:11.465469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.465 [2024-10-17 10:06:11.465575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.465 passed 00:07:08.465 Test: blockdev nvme admin passthru ...passed 00:07:08.465 Test: blockdev copy ...passed 00:07:08.465 Suite: bdevio tests on: Nvme0n1 00:07:08.465 Test: blockdev write read block ...passed 00:07:08.465 Test: blockdev write zeroes read block ...passed 00:07:08.465 Test: blockdev write zeroes read no split ...passed 00:07:08.465 Test: blockdev write zeroes read split ...passed 00:07:08.465 Test: blockdev write zeroes read split partial ...passed 00:07:08.465 Test: blockdev reset ...[2024-10-17 10:06:11.521555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:08.465 [2024-10-17 10:06:11.524475] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.465 passed 00:07:08.465 Test: blockdev write read 8 blocks ...passed 00:07:08.465 Test: blockdev write read size > 128k ...passed 00:07:08.465 Test: blockdev write read invalid size ...passed 00:07:08.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.465 Test: blockdev write read max offset ...passed 00:07:08.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.465 Test: blockdev writev readv 8 blocks ...passed 00:07:08.465 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.465 Test: blockdev writev readv block ...passed 00:07:08.465 Test: blockdev writev readv size > 128k ...passed 00:07:08.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.465 Test: blockdev comparev and writev ...passed 00:07:08.465 Test: blockdev nvme passthru rw ...[2024-10-17 10:06:11.531208] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:08.465 separate metadata which is not supported yet. 00:07:08.465 passed 00:07:08.465 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:06:11.531636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:08.465 passed 00:07:08.465 Test: blockdev nvme admin passthru ...[2024-10-17 10:06:11.531808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:08.465 passed 00:07:08.465 Test: blockdev copy ...passed 00:07:08.465 00:07:08.465 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.465 suites 6 6 n/a 0 0 00:07:08.465 tests 138 138 138 0 0 00:07:08.465 asserts 893 893 893 0 n/a 00:07:08.465 00:07:08.465 Elapsed time = 1.072 seconds 00:07:08.465 0 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59979 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 59979 ']' 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 59979 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59979 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59979' 00:07:08.726 killing process with pid 59979 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 59979 00:07:08.726 10:06:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 59979 00:07:09.298 10:06:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:09.298 00:07:09.298 real 0m2.124s 00:07:09.298 user 0m5.449s 00:07:09.298 sys 0m0.297s 00:07:09.298 10:06:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.298 10:06:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.298 ************************************ 00:07:09.298 END 
TEST bdev_bounds 00:07:09.298 ************************************ 00:07:09.298 10:06:12 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.298 10:06:12 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.298 10:06:12 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.298 10:06:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.298 ************************************ 00:07:09.298 START TEST bdev_nbd 00:07:09.298 ************************************ 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:09.298 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60033 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60033 /var/tmp/spdk-nbd.sock 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60033 ']' 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.299 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.299 10:06:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.299 [2024-10-17 10:06:12.374513] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:09.299 [2024-10-17 10:06:12.374796] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.560 [2024-10-17 10:06:12.523108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.560 [2024-10-17 10:06:12.623901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.502 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.502 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:10.502 10:06:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.502 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.502 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.502 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.503 10:06:13 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.503 1+0 records in 00:07:10.503 1+0 records out 00:07:10.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400199 s, 10.2 MB/s 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.503 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.765 1+0 records in 00:07:10.765 1+0 records out 00:07:10.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362009 s, 11.3 MB/s 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.765 10:06:13 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.765 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.026 1+0 records in 00:07:11.026 1+0 records out 00:07:11.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395412 s, 10.4 MB/s 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.026 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.027 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.027 10:06:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.027 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.027 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.027 10:06:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.287 1+0 records in 00:07:11.287 1+0 records out 00:07:11.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352131 s, 11.6 MB/s 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.287 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.547 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.548 1+0 records in 00:07:11.548 1+0 records out 00:07:11.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641937 s, 6.4 MB/s 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.548 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.809 1+0 records in 00:07:11.809 1+0 records out 00:07:11.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451606 s, 9.1 MB/s 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd0", 00:07:11.809 "bdev_name": "Nvme0n1" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd1", 00:07:11.809 "bdev_name": "Nvme1n1" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd2", 00:07:11.809 "bdev_name": "Nvme2n1" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd3", 00:07:11.809 "bdev_name": "Nvme2n2" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd4", 00:07:11.809 "bdev_name": "Nvme2n3" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd5", 00:07:11.809 "bdev_name": "Nvme3n1" 00:07:11.809 } 00:07:11.809 ]' 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd0", 00:07:11.809 "bdev_name": "Nvme0n1" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd1", 00:07:11.809 "bdev_name": "Nvme1n1" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 
"nbd_device": "/dev/nbd2", 00:07:11.809 "bdev_name": "Nvme2n1" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd3", 00:07:11.809 "bdev_name": "Nvme2n2" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd4", 00:07:11.809 "bdev_name": "Nvme2n3" 00:07:11.809 }, 00:07:11.809 { 00:07:11.809 "nbd_device": "/dev/nbd5", 00:07:11.809 "bdev_name": "Nvme3n1" 00:07:11.809 } 00:07:11.809 ]' 00:07:11.809 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.071 10:06:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.071 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.332 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:12.594 10:06:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.594 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:12.854 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.855 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.137 10:06:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
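Each nbd start/stop above pairs the RPC with a /proc/partitions check, and the start path additionally proves the device with one O_DIRECT read (the harness dd's into a scratch file and stats the byte count; of=/dev/null below is a simplification, and the retry loops stand in for the harness's bounded 20-iteration polls). Collapsed for a single device:

sock=/var/tmp/spdk-nbd.sock
scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for the kernel to register nbd0
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct     # a single direct read must succeed
scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for teardown to complete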
00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.137 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.138 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.446 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:13.707 /dev/nbd0 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.707 1+0 records in 00:07:13.707 1+0 records out 00:07:13.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345735 s, 11.8 MB/s 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.707 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:13.967 /dev/nbd1 00:07:13.967 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.967 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.967 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.967 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.967 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.967 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.968 1+0 records in 00:07:13.968 1+0 records out 
00:07:13.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241384 s, 17.0 MB/s 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.968 10:06:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:14.229 /dev/nbd10 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.229 1+0 records in 00:07:14.229 1+0 records out 00:07:14.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706265 s, 5.8 MB/s 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.229 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:14.489 /dev/nbd11 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:14.489 10:06:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.489 1+0 records in 00:07:14.489 1+0 records out 00:07:14.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413441 s, 9.9 MB/s 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.489 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:14.749 /dev/nbd12 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.749 1+0 records in 00:07:14.749 1+0 records out 00:07:14.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378873 s, 10.8 MB/s 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.749 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:15.011 /dev/nbd13 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.011 1+0 records in 00:07:15.011 1+0 records out 00:07:15.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513967 s, 8.0 MB/s 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.011 10:06:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd0", 00:07:15.274 "bdev_name": "Nvme0n1" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd1", 00:07:15.274 "bdev_name": "Nvme1n1" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd10", 00:07:15.274 "bdev_name": "Nvme2n1" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd11", 00:07:15.274 "bdev_name": "Nvme2n2" 00:07:15.274 }, 
00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd12", 00:07:15.274 "bdev_name": "Nvme2n3" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd13", 00:07:15.274 "bdev_name": "Nvme3n1" 00:07:15.274 } 00:07:15.274 ]' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd0", 00:07:15.274 "bdev_name": "Nvme0n1" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd1", 00:07:15.274 "bdev_name": "Nvme1n1" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd10", 00:07:15.274 "bdev_name": "Nvme2n1" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd11", 00:07:15.274 "bdev_name": "Nvme2n2" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd12", 00:07:15.274 "bdev_name": "Nvme2n3" 00:07:15.274 }, 00:07:15.274 { 00:07:15.274 "nbd_device": "/dev/nbd13", 00:07:15.274 "bdev_name": "Nvme3n1" 00:07:15.274 } 00:07:15.274 ]' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.274 /dev/nbd1 00:07:15.274 /dev/nbd10 00:07:15.274 /dev/nbd11 00:07:15.274 /dev/nbd12 00:07:15.274 /dev/nbd13' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.274 /dev/nbd1 00:07:15.274 /dev/nbd10 00:07:15.274 /dev/nbd11 00:07:15.274 /dev/nbd12 00:07:15.274 /dev/nbd13' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:15.274 256+0 records in 00:07:15.274 256+0 records out 00:07:15.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471271 s, 222 MB/s 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.274 256+0 records in 00:07:15.274 256+0 records out 00:07:15.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117354 s, 8.9 MB/s 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.274 10:06:18 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.536 256+0 records in 00:07:15.536 256+0 records out 00:07:15.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106046 s, 9.9 MB/s 00:07:15.536 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.536 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:15.536 256+0 records in 00:07:15.536 256+0 records out 00:07:15.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0698731 s, 15.0 MB/s 00:07:15.536 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.536 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:15.536 256+0 records in 00:07:15.536 256+0 records out 00:07:15.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129723 s, 8.1 MB/s 00:07:15.536 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.536 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:15.798 256+0 records in 00:07:15.798 256+0 records out 00:07:15.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0815537 s, 12.9 MB/s 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:15.798 256+0 records in 00:07:15.798 256+0 records out 00:07:15.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647294 s, 16.2 MB/s 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:15.798 10:06:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.798 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.059 10:06:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.320 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.582 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.842 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:17.101 10:06:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:17.101 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:17.102 
10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.102 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:17.361 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:17.623 malloc_lvol_verify 00:07:17.623 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:17.623 d8d5db38-acdb-4037-8535-7d6fcecb6724 00:07:17.885 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:17.885 53b40523-b1e5-42a1-956c-205e44692fa4 00:07:17.885 10:06:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:18.147 /dev/nbd0 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:18.147 10:06:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:18.147 mke2fs 1.47.0 (5-Feb-2023) 00:07:18.147 Discarding device blocks: 0/4096 done 00:07:18.147 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:18.147 00:07:18.147 Allocating group tables: 0/1 done 00:07:18.147 Writing inode tables: 0/1 done 00:07:18.147 Creating journal (1024 blocks): done 00:07:18.147 Writing superblocks and filesystem accounting information: 0/1 done 00:07:18.147 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.147 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60033 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60033 ']' 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60033 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60033 00:07:18.409 killing process with pid 60033 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60033' 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60033 00:07:18.409 10:06:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60033 00:07:19.353 ************************************ 00:07:19.353 END TEST bdev_nbd 00:07:19.353 ************************************ 00:07:19.353 10:06:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:19.353 00:07:19.353 real 0m9.968s 00:07:19.353 user 0m14.325s 00:07:19.353 sys 0m3.007s 00:07:19.353 10:06:22 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.353 10:06:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:19.353 10:06:22 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:19.353 10:06:22 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:19.353 skipping fio tests on NVMe due to multi-ns failures. 00:07:19.353 10:06:22 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:19.353 10:06:22 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:19.353 10:06:22 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:19.353 10:06:22 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:19.353 10:06:22 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.353 10:06:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.353 ************************************ 00:07:19.353 START TEST bdev_verify 00:07:19.353 ************************************ 00:07:19.353 10:06:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:19.353 [2024-10-17 10:06:22.403551] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:19.353 [2024-10-17 10:06:22.403676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60411 ] 00:07:19.615 [2024-10-17 10:06:22.554261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.615 [2024-10-17 10:06:22.666506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.615 [2024-10-17 10:06:22.666507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.189 Running I/O for 5 seconds... 
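For reference, the verify pass above boils down to one bdevperf invocation, repeated here with the flags annotated (meanings per standard bdevperf usage; the trailing '' in the trace is an empty pass-through argument slot):

    # bdevperf flags for the verify pass, as seen in the run_test line above:
    #   -q 128     keep 128 I/Os outstanding per job
    #   -o 4096    4096-byte (4 KiB) I/O size
    #   -w verify  write a pattern, read it back, and compare
    #   -t 5       run for 5 seconds
    #   -m 0x3     core mask 0x3, matching the two reactors on cores 0 and 1
    #   -C         passed as in the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-second IOPS ticker and the per-job latency table that follow are this command's output.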
00:07:22.612 19840.00 IOPS, 77.50 MiB/s
[2024-10-17T10:06:26.670Z] 19232.00 IOPS, 75.12 MiB/s
[2024-10-17T10:06:27.609Z] 18688.00 IOPS, 73.00 MiB/s
[2024-10-17T10:06:28.548Z] 18128.00 IOPS, 70.81 MiB/s
[2024-10-17T10:06:28.548Z] 18227.20 IOPS, 71.20 MiB/s
00:07:25.457 Latency(us)
00:07:25.457 [2024-10-17T10:06:28.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:25.457 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x0 length 0xbd0bd
00:07:25.457 Nvme0n1 : 5.08 1460.98 5.71 0.00 0.00 87402.55 17442.66 89128.96
00:07:25.457 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:25.457 Nvme0n1 : 5.07 1552.83 6.07 0.00 0.00 82040.45 13208.02 79046.50
00:07:25.457 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x0 length 0xa0000
00:07:25.457 Nvme1n1 : 5.08 1460.41 5.70 0.00 0.00 87317.56 19761.62 84692.68
00:07:25.457 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0xa0000 length 0xa0000
00:07:25.457 Nvme1n1 : 5.07 1551.86 6.06 0.00 0.00 81800.14 13409.67 72190.42
00:07:25.457 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x0 length 0x80000
00:07:25.457 Nvme2n1 : 5.09 1459.94 5.70 0.00 0.00 87196.55 20164.92 83482.78
00:07:25.457 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x80000 length 0x80000
00:07:25.457 Nvme2n1 : 5.09 1560.61 6.10 0.00 0.00 81386.11 9779.99 67754.14
00:07:25.457 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x0 length 0x80000
00:07:25.457 Nvme2n2 : 5.09 1458.35 5.70 0.00 0.00 87084.88 21878.94 76223.41
00:07:25.457 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x80000 length 0x80000
00:07:25.457 Nvme2n2 : 5.09 1559.44 6.09 0.00 0.00 81242.86 12603.08 70577.23
00:07:25.457 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x0 length 0x80000
00:07:25.457 Nvme2n3 : 5.09 1457.28 5.69 0.00 0.00 86980.88 20769.87 76626.71
00:07:25.457 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x80000 length 0x80000
00:07:25.457 Nvme2n3 : 5.09 1558.97 6.09 0.00 0.00 81114.61 12351.02 77433.30
00:07:25.457 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x0 length 0x20000
00:07:25.457 Nvme3n1 : 5.10 1456.24 5.69 0.00 0.00 86849.10 17442.66 77030.01
00:07:25.457 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.457 Verification LBA range: start 0x20000 length 0x20000
00:07:25.457 Nvme3n1 : 5.09 1557.82 6.09 0.00 0.00 81074.43 14720.39 79853.10
[2024-10-17T10:06:28.548Z] ===================================================================================================================
00:07:25.457 [2024-10-17T10:06:28.548Z] Total : 18094.73 70.68 0.00 0.00 84199.08 9779.99 89128.96
00:07:26.842
00:07:26.842 real 0m7.247s
00:07:26.842 user 0m13.498s
00:07:26.842 sys 0m0.252s
00:07:26.842 10:06:29 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.842 ************************************ 00:07:26.842 END TEST bdev_verify 00:07:26.842 ************************************ 00:07:26.842 10:06:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:26.842 10:06:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.842 10:06:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:26.842 10:06:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.842 10:06:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.842 ************************************ 00:07:26.842 START TEST bdev_verify_big_io 00:07:26.842 ************************************ 00:07:26.842 10:06:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.842 [2024-10-17 10:06:29.716607] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:26.842 [2024-10-17 10:06:29.716741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60511 ] 00:07:26.842 [2024-10-17 10:06:29.871132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.104 [2024-10-17 10:06:29.985493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.104 [2024-10-17 10:06:29.985749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.676 Running I/O for 5 seconds... 
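The big-I/O pass below reuses the same harness with -o 65536, i.e. 64 KiB I/Os at the same queue depth of 128, which is why raw IOPS drop by roughly an order of magnitude while bandwidth holds up. The ticker's MiB/s figure is simply IOPS times the I/O size; for the third sample reported below:

    2116.00 IOPS x 65536 B = 138,674,176 B/s, and 138,674,176 / 2^20 = 132.25 MiB/s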
00:07:32.391 879.00 IOPS, 54.94 MiB/s
[2024-10-17T10:06:36.866Z] 2095.00 IOPS, 130.94 MiB/s
[2024-10-17T10:06:36.866Z] 2116.00 IOPS, 132.25 MiB/s
00:07:33.776 Latency(us)
00:07:33.776 [2024-10-17T10:06:36.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:33.776 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x0 length 0xbd0b
00:07:33.776 Nvme0n1 : 5.91 104.21 6.51 0.00 0.00 1180699.37 14417.92 1180857.90
00:07:33.776 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:33.776 Nvme0n1 : 5.84 115.06 7.19 0.00 0.00 1051403.93 8721.33 1122782.92
00:07:33.776 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x0 length 0xa000
00:07:33.776 Nvme1n1 : 5.85 102.63 6.41 0.00 0.00 1164224.52 95985.03 1503496.66
00:07:33.776 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0xa000 length 0xa000
00:07:33.776 Nvme1n1 : 5.91 112.26 7.02 0.00 0.00 1057812.33 99614.72 1651910.50
00:07:33.776 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x0 length 0x8000
00:07:33.776 Nvme2n1 : 5.91 108.22 6.76 0.00 0.00 1085998.32 62107.96 1174405.12
00:07:33.776 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x8000 length 0x8000
00:07:33.776 Nvme2n1 : 5.85 111.88 6.99 0.00 0.00 1028966.07 100018.02 1690627.15
00:07:33.776 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x0 length 0x8000
00:07:33.776 Nvme2n2 : 5.92 108.18 6.76 0.00 0.00 1052675.47 62107.96 1200216.22
00:07:33.776 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x8000 length 0x8000
00:07:33.776 Nvme2n2 : 5.93 116.76 7.30 0.00 0.00 955660.58 60091.47 1729343.80
00:07:33.776 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x0 length 0x8000
00:07:33.776 Nvme2n3 : 5.92 112.95 7.06 0.00 0.00 983606.57 4159.02 1226027.32
00:07:33.776 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x8000 length 0x8000
00:07:33.776 Nvme2n3 : 5.95 131.74 8.23 0.00 0.00 823050.69 21878.94 1219574.55
00:07:33.776 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x0 length 0x2000
00:07:33.776 Nvme3n1 : 5.93 118.72 7.42 0.00 0.00 908347.43 4209.43 1245385.65
00:07:33.776 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.776 Verification LBA range: start 0x2000 length 0x2000
00:07:33.776 Nvme3n1 : 6.02 156.72 9.80 0.00 0.00 673988.23 1008.25 1806777.11
[2024-10-17T10:06:36.867Z] ===================================================================================================================
00:07:33.776 [2024-10-17T10:06:36.867Z] Total : 1399.33 87.46 0.00 0.00 980437.69 1008.25 1806777.11
00:07:37.086
00:07:37.086 real 0m9.905s
00:07:37.086 user 0m18.806s
00:07:37.086 sys 0m0.249s
00:07:37.086 10:06:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:37.086
************************************ 00:07:37.086 END TEST bdev_verify_big_io 00:07:37.086 ************************************ 00:07:37.086 10:06:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:37.086 10:06:39 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.086 10:06:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:37.086 10:06:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.086 10:06:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.086 ************************************ 00:07:37.086 START TEST bdev_write_zeroes 00:07:37.086 ************************************ 00:07:37.086 10:06:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.086 [2024-10-17 10:06:39.696943] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:37.086 [2024-10-17 10:06:39.697126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:07:37.086 [2024-10-17 10:06:39.863843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.086 [2024-10-17 10:06:39.976167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.713 Running I/O for 1 seconds... 
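The write_zeroes pass below issues zero-fill requests to each bdev (the bdev layer's write-zeroes operation) rather than data-carrying writes, again at 4 KiB and queue depth 128, for the single second given by -t 1. The same IOPS-to-bandwidth identity holds: 49536.00 IOPS x 4096 B = 202,899,456 B/s = 193.50 MiB/s, matching the ticker sample below.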
00:07:38.653 49536.00 IOPS, 193.50 MiB/s
00:07:38.653 Latency(us)
00:07:38.653 [2024-10-17T10:06:41.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:38.653 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:38.653 Nvme0n1 : 1.03 8215.68 32.09 0.00 0.00 15536.45 5620.97 27424.30
00:07:38.653 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:38.653 Nvme1n1 : 1.03 8203.06 32.04 0.00 0.00 15543.84 12048.54 25811.10
00:07:38.653 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:38.653 Nvme2n1 : 1.03 8191.94 32.00 0.00 0.00 15490.55 10889.06 24601.21
00:07:38.653 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:38.653 Nvme2n2 : 1.03 8180.75 31.96 0.00 0.00 15483.33 10586.58 24197.91
00:07:38.653 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:38.653 Nvme2n3 : 1.03 8168.44 31.91 0.00 0.00 15479.44 10384.94 24298.73
00:07:38.653 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:38.653 Nvme3n1 : 1.04 8156.04 31.86 0.00 0.00 15440.34 8065.97 26416.05
[2024-10-17T10:06:41.744Z] ===================================================================================================================
00:07:38.653 [2024-10-17T10:06:41.744Z] Total : 49115.90 191.86 0.00 0.00 15495.66 5620.97 27424.30
00:07:39.634
00:07:39.634 real 0m2.806s
00:07:39.634 user 0m2.461s
00:07:39.634 sys 0m0.223s
00:07:39.634 10:06:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:39.634 ************************************
00:07:39.634 END TEST bdev_write_zeroes
00:07:39.634 ************************************
00:07:39.634 10:06:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:39.634 10:06:42 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:39.634 10:06:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:39.634 10:06:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:39.634 10:06:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:39.634 ************************************
00:07:39.634 START TEST bdev_json_nonenclosed
00:07:39.634 ************************************
00:07:39.634 10:06:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:39.634 [2024-10-17 10:06:42.534442] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
00:07:39.634 [2024-10-17 10:06:42.534571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:07:39.634 [2024-10-17 10:06:42.684496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.895 [2024-10-17 10:06:42.812192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.895 [2024-10-17 10:06:42.812296] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:39.895 [2024-10-17 10:06:42.812314] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:39.895 [2024-10-17 10:06:42.812341] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.156 ************************************ 00:07:40.156 END TEST bdev_json_nonenclosed 00:07:40.156 00:07:40.156 real 0m0.539s 00:07:40.156 user 0m0.327s 00:07:40.156 sys 0m0.106s 00:07:40.156 10:06:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.156 10:06:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:40.156 ************************************ 00:07:40.156 10:06:43 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.156 10:06:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:40.156 10:06:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.156 10:06:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.156 ************************************ 00:07:40.156 START TEST bdev_json_nonarray 00:07:40.156 ************************************ 00:07:40.156 10:06:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.156 [2024-10-17 10:06:43.131083] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:40.156 [2024-10-17 10:06:43.131201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 00:07:40.416 [2024-10-17 10:06:43.281482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.416 [2024-10-17 10:06:43.403254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.416 [2024-10-17 10:06:43.403366] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
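Both JSON config tests feed bdevperf a deliberately malformed --json file and pass only if the app rejects it with the errors traced here and spdk_app_stop reports a non-zero exit. The actual contents of nonenclosed.json and nonarray.json are not reproduced in the log; illustrative guesses at the kind of input that trips each check:

    "subsystems": []        <- nonenclosed.json sketch: a valid key, but no enclosing {} around the document

    { "subsystems": {} }    <- nonarray.json sketch: properly enclosed, but 'subsystems' is an object, not an array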
00:07:40.416 [2024-10-17 10:06:43.403384] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.416 [2024-10-17 10:06:43.403394] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.676 00:07:40.676 real 0m0.531s 00:07:40.676 user 0m0.329s 00:07:40.676 sys 0m0.095s 00:07:40.676 10:06:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.676 10:06:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:40.676 ************************************ 00:07:40.676 END TEST bdev_json_nonarray 00:07:40.676 ************************************ 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:40.676 10:06:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:40.676 00:07:40.676 real 0m38.198s 00:07:40.676 user 0m59.655s 00:07:40.676 sys 0m5.126s 00:07:40.676 10:06:43 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.676 ************************************ 00:07:40.676 END TEST blockdev_nvme 00:07:40.676 ************************************ 00:07:40.676 10:06:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.676 10:06:43 -- spdk/autotest.sh@209 -- # uname -s 00:07:40.676 10:06:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:40.676 10:06:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:40.676 10:06:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.676 10:06:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.676 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:40.676 ************************************ 00:07:40.676 START TEST blockdev_nvme_gpt 00:07:40.676 ************************************ 00:07:40.676 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:40.935 * Looking for test storage... 
00:07:40.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:40.935 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:40.935 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:40.935 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.936 10:06:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.936 --rc genhtml_branch_coverage=1 00:07:40.936 --rc genhtml_function_coverage=1 00:07:40.936 --rc genhtml_legend=1 00:07:40.936 --rc geninfo_all_blocks=1 00:07:40.936 --rc geninfo_unexecuted_blocks=1 00:07:40.936 00:07:40.936 ' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.936 --rc 
genhtml_branch_coverage=1 00:07:40.936 --rc genhtml_function_coverage=1 00:07:40.936 --rc genhtml_legend=1 00:07:40.936 --rc geninfo_all_blocks=1 00:07:40.936 --rc geninfo_unexecuted_blocks=1 00:07:40.936 00:07:40.936 ' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.936 --rc genhtml_branch_coverage=1 00:07:40.936 --rc genhtml_function_coverage=1 00:07:40.936 --rc genhtml_legend=1 00:07:40.936 --rc geninfo_all_blocks=1 00:07:40.936 --rc geninfo_unexecuted_blocks=1 00:07:40.936 00:07:40.936 ' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:40.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.936 --rc genhtml_branch_coverage=1 00:07:40.936 --rc genhtml_function_coverage=1 00:07:40.936 --rc genhtml_legend=1 00:07:40.936 --rc geninfo_all_blocks=1 00:07:40.936 --rc geninfo_unexecuted_blocks=1 00:07:40.936 00:07:40.936 ' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60786 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60786 
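
waitforlisten, traced below, blocks until the freshly launched spdk_tgt (pid 60786) is both alive and answering RPCs on /var/tmp/spdk.sock. The essence of the helper is a poll loop along these lines (a sketch, not the exact implementation):

    # poll the RPC socket until the target answers, bailing out if it dies first
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 100; i != 0; i--)); do
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0    # target is up and serving RPCs
            fi
            kill -0 "$pid" || return 1    # target exited before it ever listened
            sleep 0.1
        done
        return 1    # timed out waiting for the socket
    }
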
00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 60786 ']' 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.936 10:06:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.936 10:06:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:40.936 [2024-10-17 10:06:43.975243] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:40.936 [2024-10-17 10:06:43.975382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60786 ] 00:07:41.241 [2024-10-17 10:06:44.130328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.241 [2024-10-17 10:06:44.248236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.808 10:06:44 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.808 10:06:44 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:41.808 10:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:41.808 10:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:41.808 10:06:44 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:42.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.327 Waiting for block devices as requested 00:07:42.327 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.327 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.587 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.587 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.873 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:47.873 10:06:50 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:47.873 BYT; 00:07:47.873 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:47.873 BYT; 00:07:47.873 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:47.873 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:47.874 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:47.874 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:47.874 10:06:50 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:53.196 10:06:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.196 10:06:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.196 10:06:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.196 10:06:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.196 10:06:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.196 10:06:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:53.770 The operation has completed successfully. 00:07:53.770 10:06:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:54.713 The operation has completed successfully. 00:07:54.713 10:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.855 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.855 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.855 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.855 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.855 10:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:55.855 10:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.855 10:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.855 [] 00:07:55.855 10:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.855 10:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:55.855 10:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:55.855 10:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:55.855 10:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.116 10:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:56.116 10:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.116 10:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:56.379 10:06:59 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:56.379 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "49db4de0-4d3f-469e-9304-ab6b3c0ec6ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "49db4de0-4d3f-469e-9304-ab6b3c0ec6ca",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0eefd291-ad50-48ee-8ae7-27f4b379aed9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0eefd291-ad50-48ee-8ae7-27f4b379aed9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fe277f2c-63a0-4cc5-ae5a-d2efcd6bc553"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe277f2c-63a0-4cc5-ae5a-d2efcd6bc553",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9c4576d2-8c31-49c4-b40e-f3f32f2ecc16"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c4576d2-8c31-49c4-b40e-f3f32f2ecc16",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "09971045-89bf-4ac7-a467-b1e485acc4a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "09971045-89bf-4ac7-a467-b1e485acc4a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:56.379 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:56.380 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:56.380 10:06:59 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60786 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 60786 ']' 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 60786 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60786 00:07:56.380 killing process with pid 60786 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60786' 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 60786 00:07:56.380 10:06:59 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 60786 00:07:58.369 10:07:01 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.369 10:07:01 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.369 10:07:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:58.369 10:07:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.369 10:07:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.369 ************************************ 00:07:58.369 START TEST bdev_hello_world 00:07:58.369 ************************************ 00:07:58.369 10:07:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.369 
[2024-10-17 10:07:01.217195] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:07:58.369 [2024-10-17 10:07:01.217607] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61465 ] 00:07:58.369 [2024-10-17 10:07:01.371027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.629 [2024-10-17 10:07:01.500630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.202 [2024-10-17 10:07:02.095832] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.202 [2024-10-17 10:07:02.095906] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:59.202 [2024-10-17 10:07:02.095930] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.202 [2024-10-17 10:07:02.098754] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.202 [2024-10-17 10:07:02.099982] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.202 [2024-10-17 10:07:02.100202] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.202 [2024-10-17 10:07:02.100509] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:59.202 00:07:59.202 [2024-10-17 10:07:02.100532] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:00.144 00:08:00.144 real 0m1.747s 00:08:00.144 user 0m1.415s 00:08:00.144 sys 0m0.219s 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 ************************************ 00:08:00.144 END TEST bdev_hello_world 00:08:00.144 ************************************ 00:08:00.144 10:07:02 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:00.144 10:07:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.144 10:07:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.144 10:07:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 ************************************ 00:08:00.144 START TEST bdev_bounds 00:08:00.144 ************************************ 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:00.144 Process bdevio pid: 61507 00:08:00.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61507 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61507' 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61507 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61507 ']' 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.144 10:07:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 [2024-10-17 10:07:03.035988] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:08:00.144 [2024-10-17 10:07:03.036188] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61507 ] 00:08:00.144 [2024-10-17 10:07:03.194420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.404 [2024-10-17 10:07:03.342305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.404 [2024-10-17 10:07:03.342996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.404 [2024-10-17 10:07:03.343144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.974 10:07:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.974 10:07:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:00.974 10:07:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:01.235 I/O targets: 00:08:01.235 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:01.235 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:01.235 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:01.235 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.235 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.235 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.235 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:01.235 00:08:01.235 00:08:01.235 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.235 http://cunit.sourceforge.net/ 00:08:01.235 00:08:01.235 00:08:01.235 Suite: bdevio tests on: Nvme3n1 00:08:01.235 Test: blockdev write read block ...passed 00:08:01.235 Test: blockdev write zeroes read block ...passed 00:08:01.235 Test: blockdev write zeroes read no split ...passed 00:08:01.235 Test: blockdev write zeroes read split ...passed 00:08:01.235 Test: blockdev write zeroes 
read split partial ...passed 00:08:01.235 Test: blockdev reset ...[2024-10-17 10:07:04.156531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:01.235 [2024-10-17 10:07:04.161848] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:01.235 passed 00:08:01.235 Test: blockdev write read 8 blocks ...passed 00:08:01.235 Test: blockdev write read size > 128k ...passed 00:08:01.235 Test: blockdev write read invalid size ...passed 00:08:01.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.235 Test: blockdev write read max offset ...passed 00:08:01.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.235 Test: blockdev writev readv 8 blocks ...passed 00:08:01.235 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.235 Test: blockdev writev readv block ...passed 00:08:01.235 Test: blockdev writev readv size > 128k ...passed 00:08:01.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.235 Test: blockdev comparev and writev ...[2024-10-17 10:07:04.185629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc04000 len:0x1000 00:08:01.235 [2024-10-17 10:07:04.185954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.235 passed 00:08:01.235 Test: blockdev nvme passthru rw ...passed 00:08:01.235 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:07:04.189687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.235 passed 00:08:01.235 Test: blockdev nvme admin passthru ...[2024-10-17 10:07:04.190013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.235 passed 00:08:01.235 Test: blockdev copy ...passed 00:08:01.235 Suite: bdevio tests on: Nvme2n3 00:08:01.235 Test: blockdev write read block ...passed 00:08:01.235 Test: blockdev write zeroes read block ...passed 00:08:01.235 Test: blockdev write zeroes read no split ...passed 00:08:01.235 Test: blockdev write zeroes read split ...passed 00:08:01.235 Test: blockdev write zeroes read split partial ...passed 00:08:01.235 Test: blockdev reset ...[2024-10-17 10:07:04.285593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:01.235 [2024-10-17 10:07:04.292037] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
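
The *NOTICE* completions scattered through these otherwise passing suites are provoked on purpose by bdevio. Decoding the (SCT/SC) status pairs printed by spdk_nvme_print_completion:

    # 02/85 -> Media Errors / COMPARE FAILURE: the "comparev and writev" test submits a
    #          COMPARE it expects to miscompare, so the failure status is the assertion passing
    # 00/01 -> Generic Command Status / INVALID OPCODE: the passthru tests send a bogus
    #          opcode and assert that the controller rejects it
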
00:08:01.235 passed 00:08:01.235 Test: blockdev write read 8 blocks ...passed 00:08:01.235 Test: blockdev write read size > 128k ...passed 00:08:01.235 Test: blockdev write read invalid size ...passed 00:08:01.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.235 Test: blockdev write read max offset ...passed 00:08:01.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.235 Test: blockdev writev readv 8 blocks ...passed 00:08:01.235 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.235 Test: blockdev writev readv block ...passed 00:08:01.235 Test: blockdev writev readv size > 128k ...passed 00:08:01.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.235 Test: blockdev comparev and writev ...[2024-10-17 10:07:04.315670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:08:01.235 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2afc02000 len:0x1000 00:08:01.235 [2024-10-17 10:07:04.316014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.235 passed 00:08:01.235 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:07:04.318674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.235 passed 00:08:01.235 Test: blockdev nvme admin passthru ...[2024-10-17 10:07:04.318814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.494 passed 00:08:01.494 Test: blockdev copy ...passed 00:08:01.494 Suite: bdevio tests on: Nvme2n2 00:08:01.494 Test: blockdev write read block ...passed 00:08:01.494 Test: blockdev write zeroes read block ...passed 00:08:01.494 Test: blockdev write zeroes read no split ...passed 00:08:01.494 Test: blockdev write zeroes read split ...passed 00:08:01.494 Test: blockdev write zeroes read split partial ...passed 00:08:01.494 Test: blockdev reset ...[2024-10-17 10:07:04.413470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:01.494 [2024-10-17 10:07:04.417134] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
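
Each suite's "blockdev reset" performs a full controller reset: nvme_ctrlr_disconnect detaches the controller, bdev_nvme reconnects it, and _bdev_nvme_reset_ctrlr_complete logs success, as seen here for the controller at 0000:00:12.0. Outside bdevio the same path can be exercised over RPC (a sketch against a running target; Nvme2 is the controller name assigned at attach time):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2
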
00:08:01.494 passed 00:08:01.494 Test: blockdev write read 8 blocks ...passed 00:08:01.494 Test: blockdev write read size > 128k ...passed 00:08:01.494 Test: blockdev write read invalid size ...passed 00:08:01.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.494 Test: blockdev write read max offset ...passed 00:08:01.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.494 Test: blockdev writev readv 8 blocks ...passed 00:08:01.494 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.494 Test: blockdev writev readv block ...passed 00:08:01.494 Test: blockdev writev readv size > 128k ...passed 00:08:01.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.494 Test: blockdev comparev and writev ...[2024-10-17 10:07:04.435278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7c38000 len:0x1000 00:08:01.494 [2024-10-17 10:07:04.435576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.494 passed 00:08:01.494 Test: blockdev nvme passthru rw ...passed 00:08:01.494 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:07:04.438568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.494 passed 00:08:01.494 Test: blockdev nvme admin passthru ...[2024-10-17 10:07:04.438810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.494 passed 00:08:01.494 Test: blockdev copy ...passed 00:08:01.494 Suite: bdevio tests on: Nvme2n1 00:08:01.494 Test: blockdev write read block ...passed 00:08:01.494 Test: blockdev write zeroes read block ...passed 00:08:01.494 Test: blockdev write zeroes read no split ...passed 00:08:01.494 Test: blockdev write zeroes read split ...passed 00:08:01.494 Test: blockdev write zeroes read split partial ...passed 00:08:01.494 Test: blockdev reset ...[2024-10-17 10:07:04.531371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:01.494 [2024-10-17 10:07:04.538194] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:01.494 passed 00:08:01.494 Test: blockdev write read 8 blocks ...passed 00:08:01.494 Test: blockdev write read size > 128k ...passed 00:08:01.494 Test: blockdev write read invalid size ...passed 00:08:01.494 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.494 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.494 Test: blockdev write read max offset ...passed 00:08:01.494 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.494 Test: blockdev writev readv 8 blocks ...passed 00:08:01.494 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.494 Test: blockdev writev readv block ...passed 00:08:01.494 Test: blockdev writev readv size > 128k ...passed 00:08:01.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.494 Test: blockdev comparev and writev ...[2024-10-17 10:07:04.562078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7c34000 len:0x1000 00:08:01.494 [2024-10-17 10:07:04.562420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.494 passed 00:08:01.494 Test: blockdev nvme passthru rw ...passed 00:08:01.494 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:07:04.566315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.494 passed 00:08:01.495 Test: blockdev nvme admin passthru ...[2024-10-17 10:07:04.566786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.495 passed 00:08:01.495 Test: blockdev copy ...passed 00:08:01.495 Suite: bdevio tests on: Nvme1n1p2 00:08:01.495 Test: blockdev write read block ...passed 00:08:01.756 Test: blockdev write zeroes read block ...passed 00:08:01.756 Test: blockdev write zeroes read no split ...passed 00:08:01.756 Test: blockdev write zeroes read split ...passed 00:08:01.756 Test: blockdev write zeroes read split partial ...passed 00:08:01.756 Test: blockdev reset ...[2024-10-17 10:07:04.671562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:01.756 [2024-10-17 10:07:04.675001] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:01.756 passed 00:08:01.756 Test: blockdev write read 8 blocks ...passed 00:08:01.756 Test: blockdev write read size > 128k ...passed 00:08:01.756 Test: blockdev write read invalid size ...passed 00:08:01.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.756 Test: blockdev write read max offset ...passed 00:08:01.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.756 Test: blockdev writev readv 8 blocks ...passed 00:08:01.756 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.756 Test: blockdev writev readv block ...passed 00:08:01.756 Test: blockdev writev readv size > 128k ...passed 00:08:01.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.756 Test: blockdev comparev and writev ...[2024-10-17 10:07:04.702043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7c30000 len:0x1000 00:08:01.756 passed 00:08:01.756 Test: blockdev nvme passthru rw ...passed 00:08:01.756 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.756 Test: blockdev nvme admin passthru ...passed 00:08:01.756 Test: blockdev copy ...[2024-10-17 10:07:04.702387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.756 passed 00:08:01.756 Suite: bdevio tests on: Nvme1n1p1 00:08:01.756 Test: blockdev write read block ...passed 00:08:01.756 Test: blockdev write zeroes read block ...passed 00:08:01.756 Test: blockdev write zeroes read no split ...passed 00:08:01.756 Test: blockdev write zeroes read split ...passed 00:08:01.756 Test: blockdev write zeroes read split partial ...passed 00:08:01.756 Test: blockdev reset ...[2024-10-17 10:07:04.767487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:01.756 [2024-10-17 10:07:04.773412] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:01.756 passed 00:08:01.756 Test: blockdev write read 8 blocks ...passed 00:08:01.756 Test: blockdev write read size > 128k ...passed 00:08:01.756 Test: blockdev write read invalid size ...passed 00:08:01.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.756 Test: blockdev write read max offset ...passed 00:08:01.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.756 Test: blockdev writev readv 8 blocks ...passed 00:08:01.756 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.756 Test: blockdev writev readv block ...passed 00:08:01.756 Test: blockdev writev readv size > 128k ...passed 00:08:01.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.756 Test: blockdev comparev and writev ...[2024-10-17 10:07:04.797348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b060e000 len:0x1000 00:08:01.756 [2024-10-17 10:07:04.797725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.756 passed 00:08:01.756 Test: blockdev nvme passthru rw ...passed 00:08:01.756 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.756 Test: blockdev nvme admin passthru ...passed 00:08:01.756 Test: blockdev copy ...passed 00:08:01.756 Suite: bdevio tests on: Nvme0n1 00:08:01.756 Test: blockdev write read block ...passed 00:08:01.756 Test: blockdev write zeroes read block ...passed 00:08:01.756 Test: blockdev write zeroes read no split ...passed 00:08:01.756 Test: blockdev write zeroes read split ...passed 00:08:02.017 Test: blockdev write zeroes read split partial ...passed 00:08:02.017 Test: blockdev reset ...[2024-10-17 10:07:04.868018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:02.017 [2024-10-17 10:07:04.871958] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:02.017 passed 00:08:02.017 Test: blockdev write read 8 blocks ...passed 00:08:02.017 Test: blockdev write read size > 128k ...passed 00:08:02.017 Test: blockdev write read invalid size ...passed 00:08:02.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.017 Test: blockdev write read max offset ...passed 00:08:02.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.017 Test: blockdev writev readv 8 blocks ...passed 00:08:02.017 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.017 Test: blockdev writev readv block ...passed 00:08:02.017 Test: blockdev writev readv size > 128k ...passed 00:08:02.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.017 Test: blockdev comparev and writev ...passed 00:08:02.017 Test: blockdev nvme passthru rw ...[2024-10-17 10:07:04.889803] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:02.017 separate metadata which is not supported yet. 
00:08:02.017 passed 00:08:02.017 Test: blockdev nvme passthru vendor specific ...[2024-10-17 10:07:04.891336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:02.017 passed 00:08:02.017 Test: blockdev nvme admin passthru ...[2024-10-17 10:07:04.891465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:02.017 passed 00:08:02.017 Test: blockdev copy ...passed 00:08:02.017 00:08:02.017 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.017 suites 7 7 n/a 0 0 00:08:02.017 tests 161 161 161 0 0 00:08:02.017 asserts 1025 1025 1025 0 n/a 00:08:02.017 00:08:02.017 Elapsed time = 1.971 seconds 00:08:02.017 0 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61507 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61507 ']' 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61507 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61507 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.017 killing process with pid 61507 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61507' 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61507 00:08:02.017 10:07:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61507 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:02.960 00:08:02.960 real 0m2.863s 00:08:02.960 user 0m7.069s 00:08:02.960 sys 0m0.415s 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:02.960 ************************************ 00:08:02.960 END TEST bdev_bounds 00:08:02.960 ************************************ 00:08:02.960 10:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.960 10:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:02.960 10:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.960 10:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:02.960 ************************************ 00:08:02.960 START TEST bdev_nbd 00:08:02.960 ************************************ 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:02.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61567 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61567 /var/tmp/spdk-nbd.sock 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61567 ']' 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.960 10:07:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:02.960 [2024-10-17 10:07:05.981773] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
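Before any per-device work starts, nbd_function_test brings up a minimal SPDK application (bdev_svc) that loads the seven bdevs from bdev.json and serves RPCs on /var/tmp/spdk-nbd.sock; each bdev is then exported to the kernel as an NBD block device. Distilled from the trace, the same setup by hand would look roughly like this (run as root; the /sys/module/nbd check above implies the nbd module must already be loaded):

    modprobe nbd
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # With no device argument SPDK allocates the first free /dev/nbdX
    # and the RPC prints the node it chose:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1
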
00:08:02.960 [2024-10-17 10:07:05.982678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.221 [2024-10-17 10:07:06.136420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.221 [2024-10-17 10:07:06.278496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.164 10:07:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.164 1+0 records in 00:08:04.164 1+0 records out 00:08:04.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824328 s, 5.0 MB/s 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.164 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.424 1+0 records in 00:08:04.424 1+0 records out 00:08:04.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012713 s, 3.2 MB/s 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.424 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.685 1+0 records in 00:08:04.685 1+0 records out 00:08:04.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833352 s, 4.9 MB/s 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.685 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.686 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.686 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.686 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.686 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.686 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.686 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.947 1+0 records in 00:08:04.947 1+0 records out 00:08:04.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080468 s, 5.1 MB/s 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.947 10:07:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.519 1+0 records in 00:08:05.519 1+0 records out 00:08:05.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815608 s, 5.0 MB/s 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
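Every nbd_start_disk in this pass is followed by the same readiness probe: poll /proc/partitions until the node appears (at most 20 tries), then read a single 4 KiB block with iflag=direct so the read bypasses the page cache and genuinely round-trips through the kernel NBD driver into SPDK, and finally stat the copied file to confirm all 4096 bytes arrived. Extracted as a standalone sketch (the helper lives in test/common/autotest_common.sh; the sleep between retries is an assumption, since a first-try success never shows it in the trace):

    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed retry delay
        done
        # One O_DIRECT block proves real I/O works end to end.
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }
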
00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.519 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:05.781 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.781 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.781 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.782 1+0 records in 00:08:05.782 1+0 records out 00:08:05.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846524 s, 4.8 MB/s 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:05.782 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.044 1+0 records in 00:08:06.044 1+0 records out 00:08:06.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00176871 s, 2.3 MB/s 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:06.044 10:07:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.044 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd0", 00:08:06.044 "bdev_name": "Nvme0n1" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd1", 00:08:06.044 "bdev_name": "Nvme1n1p1" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd2", 00:08:06.044 "bdev_name": "Nvme1n1p2" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd3", 00:08:06.044 "bdev_name": "Nvme2n1" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd4", 00:08:06.044 "bdev_name": "Nvme2n2" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd5", 00:08:06.044 "bdev_name": "Nvme2n3" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd6", 00:08:06.044 "bdev_name": "Nvme3n1" 00:08:06.044 } 00:08:06.044 ]' 00:08:06.044 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:06.044 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd0", 00:08:06.044 "bdev_name": "Nvme0n1" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd1", 00:08:06.044 "bdev_name": "Nvme1n1p1" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd2", 00:08:06.044 "bdev_name": "Nvme1n1p2" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd3", 00:08:06.044 "bdev_name": "Nvme2n1" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd4", 00:08:06.044 "bdev_name": "Nvme2n2" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd5", 00:08:06.044 "bdev_name": "Nvme2n3" 00:08:06.044 }, 00:08:06.044 { 00:08:06.044 "nbd_device": "/dev/nbd6", 00:08:06.044 "bdev_name": "Nvme3n1" 00:08:06.044 } 00:08:06.044 ]' 00:08:06.044 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.311 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.574 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.835 10:07:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:07.095 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:07.095 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:07.095 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.096 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.357 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.618 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
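Teardown mirrors setup: one nbd_stop_disk RPC per device, after which waitfornbd_exit polls /proc/partitions until the entry disappears before moving on, so a slow kernel detach does not race the next step. A counterpart sketch to the probe above, again with the retry delay assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still registered, give the kernel a moment
            else
                break        # gone from /proc/partitions, detach complete
            fi
        done
        return 0
    }
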
00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.880 10:07:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.143 10:07:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.143 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:08.403 /dev/nbd0 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.403 1+0 records in 00:08:08.403 1+0 records out 00:08:08.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565657 s, 7.2 MB/s 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.403 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:08.664 /dev/nbd1 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.664 10:07:11 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.664 1+0 records in 00:08:08.664 1+0 records out 00:08:08.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763403 s, 5.4 MB/s 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.664 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:08.926 /dev/nbd10 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.926 1+0 records in 00:08:08.926 1+0 records out 00:08:08.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808324 s, 5.1 MB/s 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.926 10:07:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:09.187 /dev/nbd11 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.187 1+0 records in 00:08:09.187 1+0 records out 00:08:09.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079149 s, 5.2 MB/s 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.187 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:09.448 /dev/nbd12 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
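Note the difference from the first pass: nbd_rpc_data_verify pins every bdev to an explicit device node (Nvme1n1p1 to /dev/nbd1, Nvme1n1p2 to /dev/nbd10, Nvme2n1 to /dev/nbd11, and so on), whereas the first pass omitted the argument and let SPDK choose. The fixed mapping lets the verification loop address devices and bdevs positionally from two parallel lists. Both forms of the same RPC, as exercised in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2             # SPDK picks a free node
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12  # caller pins the node
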
00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.448 1+0 records in 00:08:09.448 1+0 records out 00:08:09.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789281 s, 5.2 MB/s 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.448 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:09.709 /dev/nbd13 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.709 1+0 records in 00:08:09.709 1+0 records out 00:08:09.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171611 s, 2.4 MB/s 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.709 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:09.970 /dev/nbd14 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.970 1+0 records in 00:08:09.970 1+0 records out 00:08:09.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940535 s, 4.4 MB/s 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.970 10:07:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.970 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.231 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd0", 00:08:10.231 "bdev_name": "Nvme0n1" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd1", 00:08:10.231 "bdev_name": "Nvme1n1p1" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd10", 00:08:10.231 "bdev_name": "Nvme1n1p2" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd11", 00:08:10.231 "bdev_name": "Nvme2n1" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd12", 00:08:10.231 "bdev_name": "Nvme2n2" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd13", 00:08:10.231 "bdev_name": "Nvme2n3" 
00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd14", 00:08:10.231 "bdev_name": "Nvme3n1" 00:08:10.231 } 00:08:10.231 ]' 00:08:10.231 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd0", 00:08:10.231 "bdev_name": "Nvme0n1" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd1", 00:08:10.231 "bdev_name": "Nvme1n1p1" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd10", 00:08:10.231 "bdev_name": "Nvme1n1p2" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd11", 00:08:10.231 "bdev_name": "Nvme2n1" 00:08:10.231 }, 00:08:10.231 { 00:08:10.231 "nbd_device": "/dev/nbd12", 00:08:10.231 "bdev_name": "Nvme2n2" 00:08:10.232 }, 00:08:10.232 { 00:08:10.232 "nbd_device": "/dev/nbd13", 00:08:10.232 "bdev_name": "Nvme2n3" 00:08:10.232 }, 00:08:10.232 { 00:08:10.232 "nbd_device": "/dev/nbd14", 00:08:10.232 "bdev_name": "Nvme3n1" 00:08:10.232 } 00:08:10.232 ]' 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:10.232 /dev/nbd1 00:08:10.232 /dev/nbd10 00:08:10.232 /dev/nbd11 00:08:10.232 /dev/nbd12 00:08:10.232 /dev/nbd13 00:08:10.232 /dev/nbd14' 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:10.232 /dev/nbd1 00:08:10.232 /dev/nbd10 00:08:10.232 /dev/nbd11 00:08:10.232 /dev/nbd12 00:08:10.232 /dev/nbd13 00:08:10.232 /dev/nbd14' 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:10.232 256+0 records in 00:08:10.232 256+0 records out 00:08:10.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066898 s, 157 MB/s 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.232 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:10.492 256+0 records in 00:08:10.492 256+0 records out 00:08:10.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.266625 s, 3.9 MB/s 00:08:10.492 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.492 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:10.754 256+0 records in 00:08:10.754 256+0 records out 00:08:10.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.266711 s, 3.9 MB/s 00:08:10.754 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.754 10:07:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:11.016 256+0 records in 00:08:11.016 256+0 records out 00:08:11.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259734 s, 4.0 MB/s 00:08:11.016 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.016 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:11.590 256+0 records in 00:08:11.590 256+0 records out 00:08:11.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.266397 s, 3.9 MB/s 00:08:11.590 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.590 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:11.590 256+0 records in 00:08:11.590 256+0 records out 00:08:11.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.262444 s, 4.0 MB/s 00:08:11.590 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.590 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:11.853 256+0 records in 00:08:11.853 256+0 records out 00:08:11.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231211 s, 4.5 MB/s 00:08:11.853 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.853 10:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:12.115 256+0 records in 00:08:12.115 256+0 records out 00:08:12.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215721 s, 4.9 MB/s 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.115 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.377 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.639 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.901 10:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.162 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.477 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.739 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.001 10:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:14.001 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:14.263 malloc_lvol_verify 00:08:14.263 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.525 8c0d1c0a-eda5-4599-bd0f-f4e9dd6b9453 00:08:14.525 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:14.786 890d2ca8-f148-4818-a6f2-ee50bfb7e42a 00:08:14.786 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:15.048 /dev/nbd0 00:08:15.048 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:15.048 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:15.048 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:15.048 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:15.048 10:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:15.048 mke2fs 1.47.0 (5-Feb-2023) 00:08:15.048 Discarding device blocks: 0/4096 done 00:08:15.048 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:15.048 00:08:15.048 Allocating group tables: 0/1 done 00:08:15.048 Writing inode tables: 0/1 done 00:08:15.048 Creating journal (1024 blocks): done 00:08:15.048 Writing superblocks and filesystem accounting information: 0/1 done 00:08:15.048 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:15.048 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61567 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61567 ']' 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61567 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61567 00:08:15.309 killing process with pid 61567 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61567' 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61567 00:08:15.309 10:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61567 00:08:16.252 ************************************ 00:08:16.252 END TEST bdev_nbd 00:08:16.252 ************************************ 00:08:16.252 10:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:16.252 00:08:16.252 real 0m13.248s 00:08:16.252 user 0m17.892s 00:08:16.252 sys 0m4.509s 00:08:16.252 10:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.252 10:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:16.252 10:07:19 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:16.252 10:07:19 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:16.252 skipping fio tests on NVMe due to multi-ns failures. 00:08:16.252 10:07:19 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:16.252 10:07:19 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:16.252 10:07:19 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:16.252 10:07:19 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.252 10:07:19 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:16.252 10:07:19 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.252 10:07:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.252 ************************************ 00:08:16.252 START TEST bdev_verify 00:08:16.252 ************************************ 00:08:16.252 10:07:19 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.252 [2024-10-17 10:07:19.278570] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:08:16.252 [2024-10-17 10:07:19.278722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62006 ] 00:08:16.514 [2024-10-17 10:07:19.434020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.514 [2024-10-17 10:07:19.556564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.514 [2024-10-17 10:07:19.556706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.457 Running I/O for 5 seconds... 
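The bdevperf command line above maps directly onto the results that follow: -q 128 keeps 128 I/Os outstanding per job, -o 4096 issues 4 KiB I/Os, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, -m 0x3 runs reactors on cores 0 and 1, and -C lets every core submit to every bdev, which, judging from the output, is why each bdev appears twice in the table below, once per core mask. A commented sketch of the same invocation, with ./bdev.json standing in for the test's config file:

# -q 128     queue depth per job
# -o 4096    I/O size in bytes (4 KiB)
# -w verify  write a pattern, read it back, compare
# -t 5       run time in seconds
# -C         every core targets every bdev (hence the paired
#            core-mask 0x1/0x2 jobs per bdev below)
# -m 0x3     reactor core mask: cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json ./bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3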
00:08:19.788 17472.00 IOPS, 68.25 MiB/s [2024-10-17T10:07:23.523Z] 16928.00 IOPS, 66.12 MiB/s [2024-10-17T10:07:24.467Z] 17152.00 IOPS, 67.00 MiB/s [2024-10-17T10:07:25.411Z] 17184.00 IOPS, 67.12 MiB/s [2024-10-17T10:07:25.411Z] 17331.20 IOPS, 67.70 MiB/s 00:08:22.320 Latency(us) 00:08:22.320 [2024-10-17T10:07:25.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.320 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.320 Verification LBA range: start 0x0 length 0xbd0bd 00:08:22.320 Nvme0n1 : 5.08 1247.32 4.87 0.00 0.00 102055.39 13611.32 83482.78 00:08:22.320 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.320 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:22.320 Nvme0n1 : 5.07 1187.74 4.64 0.00 0.00 107205.81 26012.75 98808.12 00:08:22.320 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.320 Verification LBA range: start 0x0 length 0x4ff80 00:08:22.320 Nvme1n1p1 : 5.08 1246.58 4.87 0.00 0.00 101913.17 13308.85 78239.90 00:08:22.320 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.320 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:22.320 Nvme1n1p1 : 5.10 1193.30 4.66 0.00 0.00 106610.78 10536.17 91548.75 00:08:22.320 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.320 Verification LBA range: start 0x0 length 0x4ff7f 00:08:22.320 Nvme1n1p2 : 5.11 1252.38 4.89 0.00 0.00 101495.67 21979.77 76223.41 00:08:22.320 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.320 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:22.320 Nvme1n1p2 : 5.10 1192.19 4.66 0.00 0.00 106408.87 13308.85 82272.89 00:08:22.321 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x0 length 0x80000 00:08:22.321 Nvme2n1 : 5.11 1251.94 4.89 0.00 0.00 101365.67 18955.03 74610.22 00:08:22.321 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x80000 length 0x80000 00:08:22.321 Nvme2n1 : 5.10 1191.84 4.66 0.00 0.00 106248.17 13409.67 80659.69 00:08:22.321 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x0 length 0x80000 00:08:22.321 Nvme2n2 : 5.12 1251.16 4.89 0.00 0.00 101274.15 19559.98 75416.81 00:08:22.321 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x80000 length 0x80000 00:08:22.321 Nvme2n2 : 5.11 1201.23 4.69 0.00 0.00 105468.51 10384.94 82272.89 00:08:22.321 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x0 length 0x80000 00:08:22.321 Nvme2n3 : 5.12 1250.45 4.88 0.00 0.00 101149.31 20366.57 77836.60 00:08:22.321 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x80000 length 0x80000 00:08:22.321 Nvme2n3 : 5.12 1200.55 4.69 0.00 0.00 105307.78 11846.89 83079.48 00:08:22.321 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x0 length 0x20000 00:08:22.321 Nvme3n1 : 5.12 1249.74 4.88 0.00 0.00 101007.56 16333.59 79449.80 00:08:22.321 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.321 Verification LBA range: start 0x20000 length 0x20000 
00:08:22.321 Nvme3n1 : 5.12 1199.86 4.69 0.00 0.00 105184.11 13208.02 85499.27 00:08:22.321 [2024-10-17T10:07:25.412Z] =================================================================================================================== 00:08:22.321 [2024-10-17T10:07:25.412Z] Total : 17116.26 66.86 0.00 0.00 103708.51 10384.94 98808.12 00:08:24.236 00:08:24.236 real 0m7.719s 00:08:24.236 user 0m14.344s 00:08:24.236 sys 0m0.289s 00:08:24.236 10:07:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.236 ************************************ 00:08:24.236 END TEST bdev_verify 00:08:24.236 ************************************ 00:08:24.236 10:07:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 10:07:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.236 10:07:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:24.236 10:07:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.236 10:07:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.236 ************************************ 00:08:24.236 START TEST bdev_verify_big_io 00:08:24.236 ************************************ 00:08:24.236 10:07:27 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.236 [2024-10-17 10:07:27.089353] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:08:24.236 [2024-10-17 10:07:27.089511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62104 ] 00:08:24.236 [2024-10-17 10:07:27.246066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.498 [2024-10-17 10:07:27.389352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.498 [2024-10-17 10:07:27.389533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.073 Running I/O for 5 seconds... 
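A quick consistency check on the verify results above: the MiB/s column is IOPS times I/O size, so the 17116.26 IOPS total works out to 17116.26 × 4096 B ≈ 70,108,201 B/s, and dividing by 1,048,576 gives the reported 66.86 MiB/s. The big_io pass starting here reruns the same verify workload with -o 65536, so every I/O moves 64 KiB instead of 4 KiB.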
00:08:28.377 0.00 IOPS, 0.00 MiB/s [2024-10-17T10:07:34.019Z] 982.50 IOPS, 61.41 MiB/s [2024-10-17T10:07:34.281Z] 1749.00 IOPS, 109.31 MiB/s [2024-10-17T10:07:34.541Z] 2346.00 IOPS, 146.62 MiB/s 00:08:31.450 Latency(us) 00:08:31.450 [2024-10-17T10:07:34.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.450 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0xbd0b 00:08:31.451 Nvme0n1 : 5.65 113.25 7.08 0.00 0.00 1084710.28 31860.58 1213121.77 00:08:31.451 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:31.451 Nvme0n1 : 5.85 109.13 6.82 0.00 0.00 1107947.48 40934.79 1200216.22 00:08:31.451 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0x4ff8 00:08:31.451 Nvme1n1p1 : 5.81 114.76 7.17 0.00 0.00 1034608.39 102034.51 1051802.39 00:08:31.451 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:31.451 Nvme1n1p1 : 5.73 111.70 6.98 0.00 0.00 1069320.03 112116.97 1032444.06 00:08:31.451 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0x4ff7 00:08:31.451 Nvme1n1p2 : 5.89 119.59 7.47 0.00 0.00 973782.75 69770.63 851766.35 00:08:31.451 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:31.451 Nvme1n1p2 : 5.85 113.07 7.07 0.00 0.00 1020549.80 121796.14 890483.00 00:08:31.451 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0x8000 00:08:31.451 Nvme2n1 : 6.00 124.35 7.77 0.00 0.00 911272.34 71787.13 877577.45 00:08:31.451 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x8000 length 0x8000 00:08:31.451 Nvme2n1 : 5.93 118.76 7.42 0.00 0.00 953689.44 69367.34 909841.33 00:08:31.451 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0x8000 00:08:31.451 Nvme2n2 : 6.00 122.99 7.69 0.00 0.00 889130.48 71787.13 896935.78 00:08:31.451 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x8000 length 0x8000 00:08:31.451 Nvme2n2 : 6.00 123.63 7.73 0.00 0.00 890160.42 33070.47 929199.66 00:08:31.451 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0x8000 00:08:31.451 Nvme2n3 : 6.03 130.93 8.18 0.00 0.00 817068.90 27625.94 916294.10 00:08:31.451 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x8000 length 0x8000 00:08:31.451 Nvme2n3 : 6.05 127.34 7.96 0.00 0.00 836115.62 37506.76 1103424.59 00:08:31.451 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x0 length 0x2000 00:08:31.451 Nvme3n1 : 6.11 145.52 9.09 0.00 0.00 716500.50 1625.80 1768060.46 00:08:31.451 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.451 Verification LBA range: start 0x2000 length 0x2000 00:08:31.451 Nvme3n1 : 6.09 139.33 8.71 0.00 0.00 749424.87 
3377.62 1832588.21 00:08:31.451 [2024-10-17T10:07:34.542Z] =================================================================================================================== 00:08:31.451 [2024-10-17T10:07:34.542Z] Total : 1714.36 107.15 0.00 0.00 920236.63 1625.80 1832588.21 00:08:33.362 00:08:33.362 real 0m9.095s 00:08:33.362 user 0m16.997s 00:08:33.362 sys 0m0.348s 00:08:33.362 10:07:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.362 ************************************ 00:08:33.362 END TEST bdev_verify_big_io 00:08:33.362 ************************************ 00:08:33.362 10:07:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:33.362 10:07:36 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.362 10:07:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:33.362 10:07:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.362 10:07:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.362 ************************************ 00:08:33.362 START TEST bdev_write_zeroes 00:08:33.362 ************************************ 00:08:33.362 10:07:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.362 [2024-10-17 10:07:36.255523] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:08:33.362 [2024-10-17 10:07:36.255686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62219 ] 00:08:33.362 [2024-10-17 10:07:36.408980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.622 [2024-10-17 10:07:36.556453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.195 Running I/O for 1 seconds... 
00:08:35.236 42048.00 IOPS, 164.25 MiB/s 00:08:35.236 Latency(us) 00:08:35.236 [2024-10-17T10:07:38.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.236 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme0n1 : 1.03 6051.24 23.64 0.00 0.00 21089.87 7057.72 36296.86 00:08:35.236 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme1n1p1 : 1.03 6043.16 23.61 0.00 0.00 21092.06 13208.02 28835.84 00:08:35.236 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme1n1p2 : 1.03 6034.92 23.57 0.00 0.00 20974.00 8065.97 28029.24 00:08:35.236 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme2n1 : 1.03 6027.49 23.54 0.00 0.00 20963.83 7612.26 27222.65 00:08:35.236 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme2n2 : 1.03 6020.45 23.52 0.00 0.00 20946.94 7158.55 27827.59 00:08:35.236 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme2n3 : 1.03 6013.28 23.49 0.00 0.00 20939.65 7007.31 28432.54 00:08:35.236 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.236 Nvme3n1 : 1.03 5944.44 23.22 0.00 0.00 21156.98 12149.37 29440.79 00:08:35.236 [2024-10-17T10:07:38.327Z] =================================================================================================================== 00:08:35.236 [2024-10-17T10:07:38.327Z] Total : 42134.98 164.59 0.00 0.00 21023.14 7007.31 36296.86 00:08:36.178 00:08:36.178 real 0m2.910s 00:08:36.178 user 0m2.523s 00:08:36.178 sys 0m0.259s 00:08:36.178 10:07:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.178 ************************************ 00:08:36.178 END TEST bdev_write_zeroes 00:08:36.178 ************************************ 00:08:36.178 10:07:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:36.178 10:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.178 10:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:36.178 10:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.178 10:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.178 ************************************ 00:08:36.178 START TEST bdev_json_nonenclosed 00:08:36.178 ************************************ 00:08:36.178 10:07:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.178 [2024-10-17 10:07:39.248369] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
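bdev_json_nonenclosed, now starting (with bdev_json_nonarray right after it), is a negative test of the --json config parser: bdevperf is fed a deliberately malformed configuration and the test passes only if spdk_app_start rejects it cleanly. For reference, a valid SPDK JSON config is an object whose "subsystems" member is an array; the contents of the two fixture files are an assumption inferred from the parser errors logged below:

# Valid skeleton (standard SPDK JSON config layout):
#   { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
# nonenclosed.json: presumably the same content without the enclosing { },
#   tripping "Invalid JSON configuration: not enclosed in {}."
# nonarray.json: presumably "subsystems" as a non-array, tripping
#   "'subsystems' should be an array."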
00:08:36.178 [2024-10-17 10:07:39.248536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62272 ] 00:08:36.439 [2024-10-17 10:07:39.403321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.699 [2024-10-17 10:07:39.543374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.699 [2024-10-17 10:07:39.543486] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:36.699 [2024-10-17 10:07:39.543506] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:36.699 [2024-10-17 10:07:39.543517] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.699 00:08:36.699 real 0m0.584s 00:08:36.699 user 0m0.353s 00:08:36.699 sys 0m0.124s 00:08:36.699 10:07:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.699 ************************************ 00:08:36.699 END TEST bdev_json_nonenclosed 00:08:36.699 ************************************ 00:08:36.699 10:07:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:36.960 10:07:39 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.960 10:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:36.960 10:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.960 10:07:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.960 ************************************ 00:08:36.960 START TEST bdev_json_nonarray 00:08:36.960 ************************************ 00:08:36.960 10:07:39 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.960 [2024-10-17 10:07:39.893893] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:08:36.960 [2024-10-17 10:07:39.894090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:08:37.220 [2024-10-17 10:07:40.055485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.220 [2024-10-17 10:07:40.197911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.220 [2024-10-17 10:07:40.198059] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:37.220 [2024-10-17 10:07:40.198080] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:37.220 [2024-10-17 10:07:40.198091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.480 00:08:37.480 real 0m0.586s 00:08:37.480 user 0m0.357s 00:08:37.480 sys 0m0.121s 00:08:37.480 10:07:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.480 10:07:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:37.480 ************************************ 00:08:37.480 END TEST bdev_json_nonarray 00:08:37.480 ************************************ 00:08:37.480 10:07:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:37.480 10:07:40 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:37.480 10:07:40 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:37.480 10:07:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.480 10:07:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.480 10:07:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.480 ************************************ 00:08:37.481 START TEST bdev_gpt_uuid 00:08:37.481 ************************************ 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62329 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62329 00:08:37.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62329 ']' 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.481 10:07:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:37.741 [2024-10-17 10:07:40.641066] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
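The bdev_gpt_uuid test starting here boots a standalone spdk_tgt, loads the bdev config over RPC, and validates both GPT partition bdevs by UUID: bdev_get_bdevs -b <uuid> must return exactly one entry whose first alias and gpt.unique_partition_guid both round-trip to the same UUID (the heavily backslash-escaped [[ ... == \6\f\8\9... ]] comparisons below force a literal, glob-free bash match). The same lookup by hand, assuming the target is on its default /var/tmp/spdk.sock socket; the UUID is the SPDK_TEST_first partition from the trace below:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
  | jq -r '.[0].driver_specific.gpt.unique_partition_guid'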
00:08:37.741 [2024-10-17 10:07:40.641636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62329 ] 00:08:37.741 [2024-10-17 10:07:40.797377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.000 [2024-10-17 10:07:40.941086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.941 10:07:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.941 10:07:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:38.941 10:07:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.941 10:07:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.941 10:07:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:38.941 Some configs were skipped because the RPC state that can call them passed over. 00:08:38.941 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.941 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:38.941 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.941 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:39.202 { 00:08:39.202 "name": "Nvme1n1p1", 00:08:39.202 "aliases": [ 00:08:39.202 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:39.202 ], 00:08:39.202 "product_name": "GPT Disk", 00:08:39.202 "block_size": 4096, 00:08:39.202 "num_blocks": 655104, 00:08:39.202 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:39.202 "assigned_rate_limits": { 00:08:39.202 "rw_ios_per_sec": 0, 00:08:39.202 "rw_mbytes_per_sec": 0, 00:08:39.202 "r_mbytes_per_sec": 0, 00:08:39.202 "w_mbytes_per_sec": 0 00:08:39.202 }, 00:08:39.202 "claimed": false, 00:08:39.202 "zoned": false, 00:08:39.202 "supported_io_types": { 00:08:39.202 "read": true, 00:08:39.202 "write": true, 00:08:39.202 "unmap": true, 00:08:39.202 "flush": true, 00:08:39.202 "reset": true, 00:08:39.202 "nvme_admin": false, 00:08:39.202 "nvme_io": false, 00:08:39.202 "nvme_io_md": false, 00:08:39.202 "write_zeroes": true, 00:08:39.202 "zcopy": false, 00:08:39.202 "get_zone_info": false, 00:08:39.202 "zone_management": false, 00:08:39.202 "zone_append": false, 00:08:39.202 "compare": true, 00:08:39.202 "compare_and_write": false, 00:08:39.202 "abort": true, 00:08:39.202 "seek_hole": false, 00:08:39.202 "seek_data": false, 00:08:39.202 "copy": true, 00:08:39.202 "nvme_iov_md": false 00:08:39.202 }, 00:08:39.202 "driver_specific": { 
00:08:39.202 "gpt": { 00:08:39.202 "base_bdev": "Nvme1n1", 00:08:39.202 "offset_blocks": 256, 00:08:39.202 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:39.202 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:39.202 "partition_name": "SPDK_TEST_first" 00:08:39.202 } 00:08:39.202 } 00:08:39.202 } 00:08:39.202 ]' 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.202 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:39.202 { 00:08:39.202 "name": "Nvme1n1p2", 00:08:39.203 "aliases": [ 00:08:39.203 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:39.203 ], 00:08:39.203 "product_name": "GPT Disk", 00:08:39.203 "block_size": 4096, 00:08:39.203 "num_blocks": 655103, 00:08:39.203 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:39.203 "assigned_rate_limits": { 00:08:39.203 "rw_ios_per_sec": 0, 00:08:39.203 "rw_mbytes_per_sec": 0, 00:08:39.203 "r_mbytes_per_sec": 0, 00:08:39.203 "w_mbytes_per_sec": 0 00:08:39.203 }, 00:08:39.203 "claimed": false, 00:08:39.203 "zoned": false, 00:08:39.203 "supported_io_types": { 00:08:39.203 "read": true, 00:08:39.203 "write": true, 00:08:39.203 "unmap": true, 00:08:39.203 "flush": true, 00:08:39.203 "reset": true, 00:08:39.203 "nvme_admin": false, 00:08:39.203 "nvme_io": false, 00:08:39.203 "nvme_io_md": false, 00:08:39.203 "write_zeroes": true, 00:08:39.203 "zcopy": false, 00:08:39.203 "get_zone_info": false, 00:08:39.203 "zone_management": false, 00:08:39.203 "zone_append": false, 00:08:39.203 "compare": true, 00:08:39.203 "compare_and_write": false, 00:08:39.203 "abort": true, 00:08:39.203 "seek_hole": false, 00:08:39.203 "seek_data": false, 00:08:39.203 "copy": true, 00:08:39.203 "nvme_iov_md": false 00:08:39.203 }, 00:08:39.203 "driver_specific": { 00:08:39.203 "gpt": { 00:08:39.203 "base_bdev": "Nvme1n1", 00:08:39.203 "offset_blocks": 655360, 00:08:39.203 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:39.203 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:39.203 "partition_name": "SPDK_TEST_second" 00:08:39.203 } 00:08:39.203 } 00:08:39.203 } 00:08:39.203 ]' 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62329 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62329 ']' 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62329 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.203 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62329 00:08:39.463 killing process with pid 62329 00:08:39.463 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.463 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.463 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62329' 00:08:39.463 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62329 00:08:39.463 10:07:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62329 00:08:41.388 00:08:41.388 real 0m3.459s 00:08:41.388 user 0m3.508s 00:08:41.388 sys 0m0.514s 00:08:41.388 10:07:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.388 ************************************ 00:08:41.388 END TEST bdev_gpt_uuid 00:08:41.388 ************************************ 00:08:41.388 10:07:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:41.388 10:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:41.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.647 Waiting for block devices as requested 00:08:41.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.648 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:41.908 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.908 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.203 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:47.203 10:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:47.203 10:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:47.462 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:47.462 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:47.462 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:47.462 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:47.462 10:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:47.462 00:08:47.462 real 1m6.584s 00:08:47.462 user 1m22.796s 00:08:47.462 sys 0m9.684s 00:08:47.462 10:07:50 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.462 ************************************ 00:08:47.462 END TEST blockdev_nvme_gpt 00:08:47.462 ************************************ 00:08:47.462 10:07:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 10:07:50 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:47.462 10:07:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.462 10:07:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.462 10:07:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 ************************************ 00:08:47.462 START TEST nvme 00:08:47.462 ************************************ 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:47.462 * Looking for test storage... 00:08:47.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.462 10:07:50 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.462 10:07:50 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.462 10:07:50 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.462 10:07:50 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.462 10:07:50 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.462 10:07:50 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:47.462 10:07:50 nvme -- scripts/common.sh@345 -- # : 1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.462 10:07:50 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.462 10:07:50 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@353 -- # local d=1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.462 10:07:50 nvme -- scripts/common.sh@355 -- # echo 1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.462 10:07:50 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@353 -- # local d=2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.462 10:07:50 nvme -- scripts/common.sh@355 -- # echo 2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.462 10:07:50 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.462 10:07:50 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.462 10:07:50 nvme -- scripts/common.sh@368 -- # return 0 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.462 --rc genhtml_branch_coverage=1 00:08:47.462 --rc genhtml_function_coverage=1 00:08:47.462 --rc genhtml_legend=1 00:08:47.462 --rc geninfo_all_blocks=1 00:08:47.462 --rc geninfo_unexecuted_blocks=1 00:08:47.462 00:08:47.462 ' 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.462 --rc genhtml_branch_coverage=1 00:08:47.462 --rc genhtml_function_coverage=1 00:08:47.462 --rc genhtml_legend=1 00:08:47.462 --rc geninfo_all_blocks=1 00:08:47.462 --rc geninfo_unexecuted_blocks=1 00:08:47.462 00:08:47.462 ' 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:47.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.462 --rc genhtml_branch_coverage=1 00:08:47.462 --rc genhtml_function_coverage=1 00:08:47.462 --rc genhtml_legend=1 00:08:47.462 --rc geninfo_all_blocks=1 00:08:47.462 --rc geninfo_unexecuted_blocks=1 00:08:47.462 00:08:47.462 ' 00:08:47.462 10:07:50 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.462 --rc genhtml_branch_coverage=1 00:08:47.462 --rc genhtml_function_coverage=1 00:08:47.462 --rc genhtml_legend=1 00:08:47.462 --rc geninfo_all_blocks=1 00:08:47.462 --rc geninfo_unexecuted_blocks=1 00:08:47.462 00:08:47.462 ' 00:08:47.462 10:07:50 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:48.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:48.594 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.594 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.594 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.594 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.594 10:07:51 nvme -- nvme/nvme.sh@79 -- # uname 00:08:48.594 10:07:51 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:48.594 10:07:51 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:48.594 10:07:51 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:48.594 10:07:51 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1071 -- # stubpid=62970 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:48.594 Waiting for stub to ready for secondary processes... 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/62970 ]] 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:48.594 10:07:51 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:48.852 [2024-10-17 10:07:51.687781] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:08:48.852 [2024-10-17 10:07:51.687899] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:49.419 [2024-10-17 10:07:52.479085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.678 [2024-10-17 10:07:52.577523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.678 [2024-10-17 10:07:52.577712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.678 [2024-10-17 10:07:52.577831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.678 [2024-10-17 10:07:52.593853] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:49.678 [2024-10-17 10:07:52.594024] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.678 [2024-10-17 10:07:52.608027] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:49.678 [2024-10-17 10:07:52.608302] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:49.678 [2024-10-17 10:07:52.613514] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.678 [2024-10-17 10:07:52.614027] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:49.678 [2024-10-17 10:07:52.614210] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:49.678 [2024-10-17 10:07:52.617807] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.678 [2024-10-17 10:07:52.618241] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:49.678 [2024-10-17 10:07:52.618478] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:49.678 [2024-10-17 10:07:52.620464] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.678 [2024-10-17 10:07:52.620627] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:49.678 [2024-10-17 10:07:52.620698] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:49.678 [2024-10-17 10:07:52.620752] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:49.678 [2024-10-17 10:07:52.620799] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:49.678 done. 00:08:49.678 10:07:52 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:49.678 10:07:52 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:49.678 10:07:52 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:49.678 10:07:52 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:49.678 10:07:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.678 10:07:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.678 ************************************ 00:08:49.678 START TEST nvme_reset 00:08:49.678 ************************************ 00:08:49.678 10:07:52 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:49.936 Initializing NVMe Controllers 00:08:49.936 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:49.936 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:49.936 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:49.936 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:49.936 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:49.936 ************************************ 00:08:49.936 END TEST nvme_reset 00:08:49.936 ************************************ 00:08:49.936 00:08:49.936 real 0m0.198s 00:08:49.936 user 0m0.078s 00:08:49.936 sys 0m0.073s 00:08:49.936 10:07:52 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.936 10:07:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 10:07:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:49.936 10:07:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.936 10:07:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.936 10:07:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 ************************************ 00:08:49.936 START TEST nvme_identify 00:08:49.936 ************************************ 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:49.936 10:07:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:49.936 10:07:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:49.936 10:07:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:49.936 10:07:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:49.936 10:07:52 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:49.936 10:07:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:50.196 [2024-10-17 
10:07:53.158543] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 62991 terminated unexpected 00:08:50.196 ===================================================== 00:08:50.196 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.196 ===================================================== 00:08:50.196 Controller Capabilities/Features 00:08:50.196 ================================ 00:08:50.196 Vendor ID: 1b36 00:08:50.196 Subsystem Vendor ID: 1af4 00:08:50.196 Serial Number: 12341 00:08:50.196 Model Number: QEMU NVMe Ctrl 00:08:50.196 Firmware Version: 8.0.0 00:08:50.196 Recommended Arb Burst: 6 00:08:50.196 IEEE OUI Identifier: 00 54 52 00:08:50.196 Multi-path I/O 00:08:50.196 May have multiple subsystem ports: No 00:08:50.196 May have multiple controllers: No 00:08:50.196 Associated with SR-IOV VF: No 00:08:50.196 Max Data Transfer Size: 524288 00:08:50.196 Max Number of Namespaces: 256 00:08:50.196 Max Number of I/O Queues: 64 00:08:50.196 NVMe Specification Version (VS): 1.4 00:08:50.196 NVMe Specification Version (Identify): 1.4 00:08:50.196 Maximum Queue Entries: 2048 00:08:50.196 Contiguous Queues Required: Yes 00:08:50.196 Arbitration Mechanisms Supported 00:08:50.196 Weighted Round Robin: Not Supported 00:08:50.196 Vendor Specific: Not Supported 00:08:50.196 Reset Timeout: 7500 ms 00:08:50.196 Doorbell Stride: 4 bytes 00:08:50.196 NVM Subsystem Reset: Not Supported 00:08:50.196 Command Sets Supported 00:08:50.196 NVM Command Set: Supported 00:08:50.196 Boot Partition: Not Supported 00:08:50.196 Memory Page Size Minimum: 4096 bytes 00:08:50.196 Memory Page Size Maximum: 65536 bytes 00:08:50.196 Persistent Memory Region: Not Supported 00:08:50.196 Optional Asynchronous Events Supported 00:08:50.197 Namespace Attribute Notices: Supported 00:08:50.197 Firmware Activation Notices: Not Supported 00:08:50.197 ANA Change Notices: Not Supported 00:08:50.197 PLE Aggregate Log Change Notices: Not Supported 00:08:50.197 LBA Status Info Alert Notices: Not Supported 00:08:50.197 EGE Aggregate Log Change Notices: Not Supported 00:08:50.197 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.197 Zone Descriptor Change Notices: Not Supported 00:08:50.197 Discovery Log Change Notices: Not Supported 00:08:50.197 Controller Attributes 00:08:50.197 128-bit Host Identifier: Not Supported 00:08:50.197 Non-Operational Permissive Mode: Not Supported 00:08:50.197 NVM Sets: Not Supported 00:08:50.197 Read Recovery Levels: Not Supported 00:08:50.197 Endurance Groups: Not Supported 00:08:50.197 Predictable Latency Mode: Not Supported 00:08:50.197 Traffic Based Keep Alive: Not Supported 00:08:50.197 Namespace Granularity: Not Supported 00:08:50.197 SQ Associations: Not Supported 00:08:50.197 UUID List: Not Supported 00:08:50.197 Multi-Domain Subsystem: Not Supported 00:08:50.197 Fixed Capacity Management: Not Supported 00:08:50.197 Variable Capacity Management: Not Supported 00:08:50.197 Delete Endurance Group: Not Supported 00:08:50.197 Delete NVM Set: Not Supported 00:08:50.197 Extended LBA Formats Supported: Supported 00:08:50.197 Flexible Data Placement Supported: Not Supported 00:08:50.197 00:08:50.197 Controller Memory Buffer Support 00:08:50.197 ================================ 00:08:50.197 Supported: No 00:08:50.197 00:08:50.197 Persistent Memory Region Support 00:08:50.197 ================================ 00:08:50.197 Supported: No 00:08:50.197 00:08:50.197 Admin Command Set Attributes 00:08:50.197 ============================ 00:08:50.197 Security Send/Receive: Not
Supported 00:08:50.197 Format NVM: Supported 00:08:50.197 Firmware Activate/Download: Not Supported 00:08:50.197 Namespace Management: Supported 00:08:50.197 Device Self-Test: Not Supported 00:08:50.197 Directives: Supported 00:08:50.197 NVMe-MI: Not Supported 00:08:50.197 Virtualization Management: Not Supported 00:08:50.197 Doorbell Buffer Config: Supported 00:08:50.197 Get LBA Status Capability: Not Supported 00:08:50.197 Command & Feature Lockdown Capability: Not Supported 00:08:50.197 Abort Command Limit: 4 00:08:50.197 Async Event Request Limit: 4 00:08:50.197 Number of Firmware Slots: N/A 00:08:50.197 Firmware Slot 1 Read-Only: N/A 00:08:50.197 Firmware Activation Without Reset: N/A 00:08:50.197 Multiple Update Detection Support: N/A 00:08:50.197 Firmware Update Granularity: No Information Provided 00:08:50.197 Per-Namespace SMART Log: Yes 00:08:50.197 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.197 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:50.197 Command Effects Log Page: Supported 00:08:50.197 Get Log Page Extended Data: Supported 00:08:50.197 Telemetry Log Pages: Not Supported 00:08:50.197 Persistent Event Log Pages: Not Supported 00:08:50.197 Supported Log Pages Log Page: May Support 00:08:50.197 Commands Supported & Effects Log Page: Not Supported 00:08:50.197 Feature Identifiers & Effects Log Page: May Support 00:08:50.197 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.197 Data Area 4 for Telemetry Log: Not Supported 00:08:50.197 Error Log Page Entries Supported: 1 00:08:50.197 Keep Alive: Not Supported 00:08:50.197 00:08:50.197 NVM Command Set Attributes 00:08:50.197 ========================== 00:08:50.197 Submission Queue Entry Size 00:08:50.197 Max: 64 00:08:50.197 Min: 64 00:08:50.197 Completion Queue Entry Size 00:08:50.197 Max: 16 00:08:50.197 Min: 16 00:08:50.197 Number of Namespaces: 256 00:08:50.197 Compare Command: Supported 00:08:50.197 Write Uncorrectable Command: Not Supported 00:08:50.197 Dataset Management Command: Supported 00:08:50.197 Write Zeroes Command: Supported 00:08:50.197 Set Features Save Field: Supported 00:08:50.197 Reservations: Not Supported 00:08:50.197 Timestamp: Supported 00:08:50.197 Copy: Supported 00:08:50.197 Volatile Write Cache: Present 00:08:50.197 Atomic Write Unit (Normal): 1 00:08:50.197 Atomic Write Unit (PFail): 1 00:08:50.197 Atomic Compare & Write Unit: 1 00:08:50.197 Fused Compare & Write: Not Supported 00:08:50.197 Scatter-Gather List 00:08:50.197 SGL Command Set: Supported 00:08:50.197 SGL Keyed: Not Supported 00:08:50.197 SGL Bit Bucket Descriptor: Not Supported 00:08:50.197 SGL Metadata Pointer: Not Supported 00:08:50.197 Oversized SGL: Not Supported 00:08:50.197 SGL Metadata Address: Not Supported 00:08:50.197 SGL Offset: Not Supported 00:08:50.197 Transport SGL Data Block: Not Supported 00:08:50.197 Replay Protected Memory Block: Not Supported 00:08:50.197 00:08:50.197 Firmware Slot Information 00:08:50.197 ========================= 00:08:50.197 Active slot: 1 00:08:50.197 Slot 1 Firmware Revision: 1.0 00:08:50.197 00:08:50.197 00:08:50.197 Commands Supported and Effects 00:08:50.197 ============================== 00:08:50.197 Admin Commands 00:08:50.197 -------------- 00:08:50.197 Delete I/O Submission Queue (00h): Supported 00:08:50.197 Create I/O Submission Queue (01h): Supported 00:08:50.197 Get Log Page (02h): Supported 00:08:50.197 Delete I/O Completion Queue (04h): Supported 00:08:50.197 Create I/O Completion Queue (05h): Supported 00:08:50.197 Identify (06h): Supported 00:08:50.197
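The identify dumps in this run come from the two steps traced above: get_nvme_bdfs pulls the controller PCI addresses out of the gen_nvme.sh JSON, and spdk_nvme_identify prints the per-controller data. A minimal sketch of that flow, assuming the repository paths used in this run and devices already bound by setup.sh (an illustration, not the test harness itself):

  #!/usr/bin/env bash
  # Sketch of the nvme_identify enumeration-and-dump flow traced above.
  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate NVMe PCI addresses from the generated JSON config.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # e.g. 0000:00:10.0 ... 0000:00:13.0
  # Dump identify data for every controller attached to shared-memory id 0.
  "$rootdir/build/bin/spdk_nvme_identify" -i 0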
Abort (08h): Supported 00:08:50.197 Set Features (09h): Supported 00:08:50.197 Get Features (0Ah): Supported 00:08:50.197 Asynchronous Event Request (0Ch): Supported 00:08:50.197 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.197 Directive Send (19h): Supported 00:08:50.197 Directive Receive (1Ah): Supported 00:08:50.197 Virtualization Management (1Ch): Supported 00:08:50.197 Doorbell Buffer Config (7Ch): Supported 00:08:50.197 Format NVM (80h): Supported LBA-Change 00:08:50.197 I/O Commands 00:08:50.197 ------------ 00:08:50.197 Flush (00h): Supported LBA-Change 00:08:50.197 Write (01h): Supported LBA-Change 00:08:50.197 Read (02h): Supported 00:08:50.197 Compare (05h): Supported 00:08:50.197 Write Zeroes (08h): Supported LBA-Change 00:08:50.197 Dataset Management (09h): Supported LBA-Change 00:08:50.197 Unknown (0Ch): Supported 00:08:50.197 Unknown (12h): Supported 00:08:50.197 Copy (19h): Supported LBA-Change 00:08:50.197 Unknown (1Dh): Supported LBA-Change 00:08:50.197 00:08:50.197 Error Log 00:08:50.197 ========= 00:08:50.197 00:08:50.197 Arbitration 00:08:50.197 =========== 00:08:50.197 Arbitration Burst: no limit 00:08:50.197 00:08:50.197 Power Management 00:08:50.197 ================ 00:08:50.197 Number of Power States: 1 00:08:50.197 Current Power State: Power State #0 00:08:50.197 Power State #0: 00:08:50.197 Max Power: 25.00 W 00:08:50.197 Non-Operational State: Operational 00:08:50.197 Entry Latency: 16 microseconds 00:08:50.197 Exit Latency: 4 microseconds 00:08:50.197 Relative Read Throughput: 0 00:08:50.197 Relative Read Latency: 0 00:08:50.197 Relative Write Throughput: 0 00:08:50.197 Relative Write Latency: 0 00:08:50.197 [2024-10-17 10:07:53.160514] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 62991 terminated unexpected 00:08:50.197 Idle Power: Not Reported 00:08:50.197 Active Power: Not Reported 00:08:50.197 Non-Operational Permissive Mode: Not Supported 00:08:50.197 00:08:50.197 Health Information 00:08:50.197 ================== 00:08:50.197 Critical Warnings: 00:08:50.197 Available Spare Space: OK 00:08:50.197 Temperature: OK 00:08:50.197 Device Reliability: OK 00:08:50.197 Read Only: No 00:08:50.197 Volatile Memory Backup: OK 00:08:50.197 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.197 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.197 Available Spare: 0% 00:08:50.197 Available Spare Threshold: 0% 00:08:50.197 Life Percentage Used: 0% 00:08:50.197 Data Units Read: 967 00:08:50.197 Data Units Written: 838 00:08:50.197 Host Read Commands: 46095 00:08:50.197 Host Write Commands: 44959 00:08:50.197 Controller Busy Time: 0 minutes 00:08:50.197 Power Cycles: 0 00:08:50.197 Power On Hours: 0 hours 00:08:50.197 Unsafe Shutdowns: 0 00:08:50.197 Unrecoverable Media Errors: 0 00:08:50.197 Lifetime Error Log Entries: 0 00:08:50.197 Warning Temperature Time: 0 minutes 00:08:50.197 Critical Temperature Time: 0 minutes 00:08:50.197 00:08:50.197 Number of Queues 00:08:50.197 ================ 00:08:50.197 Number of I/O Submission Queues: 64 00:08:50.197 Number of I/O Completion Queues: 64 00:08:50.197 00:08:50.197 ZNS Specific Controller Data 00:08:50.197 ============================ 00:08:50.197 Zone Append Size Limit: 0 00:08:50.197 00:08:50.197 00:08:50.197 Active Namespaces 00:08:50.197 ================= 00:08:50.197 Namespace ID:1 00:08:50.197 Error Recovery Timeout: Unlimited 00:08:50.197 Command Set Identifier: NVM (00h) 00:08:50.197 Deallocate: Supported 00:08:50.197 Deallocated/Unwritten Error:
Supported 00:08:50.197 Deallocated Read Value: All 0x00 00:08:50.197 Deallocate in Write Zeroes: Not Supported 00:08:50.197 Deallocated Guard Field: 0xFFFF 00:08:50.197 Flush: Supported 00:08:50.197 Reservation: Not Supported 00:08:50.198 Namespace Sharing Capabilities: Private 00:08:50.198 Size (in LBAs): 1310720 (5GiB) 00:08:50.198 Capacity (in LBAs): 1310720 (5GiB) 00:08:50.198 Utilization (in LBAs): 1310720 (5GiB) 00:08:50.198 Thin Provisioning: Not Supported 00:08:50.198 Per-NS Atomic Units: No 00:08:50.198 Maximum Single Source Range Length: 128 00:08:50.198 Maximum Copy Length: 128 00:08:50.198 Maximum Source Range Count: 128 00:08:50.198 NGUID/EUI64 Never Reused: No 00:08:50.198 Namespace Write Protected: No 00:08:50.198 Number of LBA Formats: 8 00:08:50.198 Current LBA Format: LBA Format #04 00:08:50.198 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.198 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.198 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.198 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.198 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.198 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.198 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.198 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.198 00:08:50.198 NVM Specific Namespace Data 00:08:50.198 =========================== 00:08:50.198 Logical Block Storage Tag Mask: 0 00:08:50.198 Protection Information Capabilities: 00:08:50.198 16b Guard Protection Information Storage Tag Support: No 00:08:50.198 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.198 Storage Tag Check Read Support: No 00:08:50.198 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.198 ===================================================== 00:08:50.198 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.198 ===================================================== 00:08:50.198 Controller Capabilities/Features 00:08:50.198 ================================ 00:08:50.198 Vendor ID: 1b36 00:08:50.198 Subsystem Vendor ID: 1af4 00:08:50.198 Serial Number: 12343 00:08:50.198 Model Number: QEMU NVMe Ctrl 00:08:50.198 Firmware Version: 8.0.0 00:08:50.198 Recommended Arb Burst: 6 00:08:50.198 IEEE OUI Identifier: 00 54 52 00:08:50.198 Multi-path I/O 00:08:50.198 May have multiple subsystem ports: No 00:08:50.198 May have multiple controllers: Yes 00:08:50.198 Associated with SR-IOV VF: No 00:08:50.198 Max Data Transfer Size: 524288 00:08:50.198 Max Number of Namespaces: 256 00:08:50.198 Max Number of I/O Queues: 64 00:08:50.198 NVMe Specification Version (VS): 1.4 00:08:50.198 NVMe Specification Version (Identify): 1.4 00:08:50.198 Maximum Queue Entries: 2048 00:08:50.198 Contiguous 
Queues Required: Yes 00:08:50.198 Arbitration Mechanisms Supported 00:08:50.198 Weighted Round Robin: Not Supported 00:08:50.198 Vendor Specific: Not Supported 00:08:50.198 Reset Timeout: 7500 ms 00:08:50.198 Doorbell Stride: 4 bytes 00:08:50.198 NVM Subsystem Reset: Not Supported 00:08:50.198 Command Sets Supported 00:08:50.198 NVM Command Set: Supported 00:08:50.198 Boot Partition: Not Supported 00:08:50.198 Memory Page Size Minimum: 4096 bytes 00:08:50.198 Memory Page Size Maximum: 65536 bytes 00:08:50.198 Persistent Memory Region: Not Supported 00:08:50.198 Optional Asynchronous Events Supported 00:08:50.198 Namespace Attribute Notices: Supported 00:08:50.198 Firmware Activation Notices: Not Supported 00:08:50.198 ANA Change Notices: Not Supported 00:08:50.198 PLE Aggregate Log Change Notices: Not Supported 00:08:50.198 LBA Status Info Alert Notices: Not Supported 00:08:50.198 EGE Aggregate Log Change Notices: Not Supported 00:08:50.198 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.198 Zone Descriptor Change Notices: Not Supported 00:08:50.198 Discovery Log Change Notices: Not Supported 00:08:50.198 Controller Attributes 00:08:50.198 128-bit Host Identifier: Not Supported 00:08:50.198 Non-Operational Permissive Mode: Not Supported 00:08:50.198 NVM Sets: Not Supported 00:08:50.198 Read Recovery Levels: Not Supported 00:08:50.198 Endurance Groups: Supported 00:08:50.198 Predictable Latency Mode: Not Supported 00:08:50.198 Traffic Based Keep Alive: Not Supported 00:08:50.198 Namespace Granularity: Not Supported 00:08:50.198 SQ Associations: Not Supported 00:08:50.198 UUID List: Not Supported 00:08:50.198 Multi-Domain Subsystem: Not Supported 00:08:50.198 Fixed Capacity Management: Not Supported 00:08:50.198 Variable Capacity Management: Not Supported 00:08:50.198 Delete Endurance Group: Not Supported 00:08:50.198 Delete NVM Set: Not Supported 00:08:50.198 Extended LBA Formats Supported: Supported 00:08:50.198 Flexible Data Placement Supported: Supported 00:08:50.198 00:08:50.198 Controller Memory Buffer Support 00:08:50.198 ================================ 00:08:50.198 Supported: No 00:08:50.198 00:08:50.198 Persistent Memory Region Support 00:08:50.198 ================================ 00:08:50.198 Supported: No 00:08:50.198 00:08:50.198 Admin Command Set Attributes 00:08:50.198 ============================ 00:08:50.198 Security Send/Receive: Not Supported 00:08:50.198 Format NVM: Supported 00:08:50.198 Firmware Activate/Download: Not Supported 00:08:50.198 Namespace Management: Supported 00:08:50.198 Device Self-Test: Not Supported 00:08:50.198 Directives: Supported 00:08:50.198 NVMe-MI: Not Supported 00:08:50.198 Virtualization Management: Not Supported 00:08:50.198 Doorbell Buffer Config: Supported 00:08:50.198 Get LBA Status Capability: Not Supported 00:08:50.198 Command & Feature Lockdown Capability: Not Supported 00:08:50.198 Abort Command Limit: 4 00:08:50.198 Async Event Request Limit: 4 00:08:50.198 Number of Firmware Slots: N/A 00:08:50.198 Firmware Slot 1 Read-Only: N/A 00:08:50.198 Firmware Activation Without Reset: N/A 00:08:50.198 Multiple Update Detection Support: N/A 00:08:50.198 Firmware Update Granularity: No Information Provided 00:08:50.198 Per-Namespace SMART Log: Yes 00:08:50.198 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.198 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:50.198 Command Effects Log Page: Supported 00:08:50.198 Get Log Page Extended Data: Supported 00:08:50.198 Telemetry Log Pages: Not Supported 00:08:50.198
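The bdev_gpt_uuid test that finished above rests on one invariant: for a GPT partition bdev, the bdev alias equals the partition's unique GUID. A minimal sketch of that check, assuming the bdev JSON comes from SPDK's rpc.py bdev_get_bdevs (the jq paths are the ones traced in the test; the bdev name is hypothetical):

  #!/usr/bin/env bash
  # Sketch of the GPT UUID invariant checked by bdev_gpt_uuid (illustration only).
  json=$(./scripts/rpc.py bdev_get_bdevs)          # assumed source of the JSON
  alias=$(jq -r '.[0].aliases[0]' <<< "$json")
  guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$json")
  # The traced test compares against a literal with every character glob-escaped;
  # quoting the right-hand side achieves the same literal match here.
  [[ "$alias" == "$guid" ]] && echo "alias matches unique_partition_guid"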
Persistent Event Log Pages: Not Supported 00:08:50.198 Supported Log Pages Log Page: May Support 00:08:50.198 Commands Supported & Effects Log Page: Not Supported 00:08:50.198 Feature Identifiers & Effects Log Page: May Support 00:08:50.198 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.198 Data Area 4 for Telemetry Log: Not Supported 00:08:50.198 Error Log Page Entries Supported: 1 00:08:50.198 Keep Alive: Not Supported 00:08:50.198 00:08:50.198 NVM Command Set Attributes 00:08:50.198 ========================== 00:08:50.198 Submission Queue Entry Size 00:08:50.198 Max: 64 00:08:50.198 Min: 64 00:08:50.198 Completion Queue Entry Size 00:08:50.198 Max: 16 00:08:50.198 Min: 16 00:08:50.198 Number of Namespaces: 256 00:08:50.198 Compare Command: Supported 00:08:50.198 Write Uncorrectable Command: Not Supported 00:08:50.198 Dataset Management Command: Supported 00:08:50.198 Write Zeroes Command: Supported 00:08:50.198 Set Features Save Field: Supported 00:08:50.198 Reservations: Not Supported 00:08:50.198 Timestamp: Supported 00:08:50.198 Copy: Supported 00:08:50.198 Volatile Write Cache: Present 00:08:50.198 Atomic Write Unit (Normal): 1 00:08:50.198 Atomic Write Unit (PFail): 1 00:08:50.198 Atomic Compare & Write Unit: 1 00:08:50.198 Fused Compare & Write: Not Supported 00:08:50.198 Scatter-Gather List 00:08:50.198 SGL Command Set: Supported 00:08:50.198 SGL Keyed: Not Supported 00:08:50.198 SGL Bit Bucket Descriptor: Not Supported 00:08:50.198 SGL Metadata Pointer: Not Supported 00:08:50.198 Oversized SGL: Not Supported 00:08:50.198 SGL Metadata Address: Not Supported 00:08:50.198 SGL Offset: Not Supported 00:08:50.198 Transport SGL Data Block: Not Supported 00:08:50.198 Replay Protected Memory Block: Not Supported 00:08:50.198 00:08:50.198 Firmware Slot Information 00:08:50.198 ========================= 00:08:50.198 Active slot: 1 00:08:50.198 Slot 1 Firmware Revision: 1.0 00:08:50.198 00:08:50.198 00:08:50.198 Commands Supported and Effects 00:08:50.198 ============================== 00:08:50.198 Admin Commands 00:08:50.198 -------------- 00:08:50.198 Delete I/O Submission Queue (00h): Supported 00:08:50.198 Create I/O Submission Queue (01h): Supported 00:08:50.198 Get Log Page (02h): Supported 00:08:50.198 Delete I/O Completion Queue (04h): Supported 00:08:50.198 Create I/O Completion Queue (05h): Supported 00:08:50.198 Identify (06h): Supported 00:08:50.198 Abort (08h): Supported 00:08:50.198 Set Features (09h): Supported 00:08:50.198 Get Features (0Ah): Supported 00:08:50.198 Asynchronous Event Request (0Ch): Supported 00:08:50.198 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.198 Directive Send (19h): Supported 00:08:50.198 Directive Receive (1Ah): Supported 00:08:50.199 Virtualization Management (1Ch): Supported 00:08:50.199 Doorbell Buffer Config (7Ch): Supported 00:08:50.199 Format NVM (80h): Supported LBA-Change 00:08:50.199 I/O Commands 00:08:50.199 ------------ 00:08:50.199 Flush (00h): Supported LBA-Change 00:08:50.199 Write (01h): Supported LBA-Change 00:08:50.199 Read (02h): Supported 00:08:50.199 Compare (05h): Supported 00:08:50.199 Write Zeroes (08h): Supported LBA-Change 00:08:50.199 Dataset Management (09h): Supported LBA-Change 00:08:50.199 Unknown (0Ch): Supported 00:08:50.199 Unknown (12h): Supported 00:08:50.199 Copy (19h): Supported LBA-Change 00:08:50.199 Unknown (1Dh): Supported LBA-Change 00:08:50.199 00:08:50.199 Error Log 00:08:50.199 ========= 00:08:50.199 00:08:50.199 Arbitration 00:08:50.199 =========== 00:08:50.199 Arbitration
Burst: no limit 00:08:50.199 00:08:50.199 Power Management 00:08:50.199 ================ 00:08:50.199 Number of Power States: 1 00:08:50.199 Current Power State: Power State #0 00:08:50.199 Power State #0: 00:08:50.199 Max Power: 25.00 W 00:08:50.199 Non-Operational State: Operational 00:08:50.199 Entry Latency: 16 microseconds 00:08:50.199 Exit Latency: 4 microseconds 00:08:50.199 Relative Read Throughput: 0 00:08:50.199 Relative Read Latency: 0 00:08:50.199 Relative Write Throughput: 0 00:08:50.199 Relative Write Latency: 0 00:08:50.199 Idle Power: Not Reported 00:08:50.199 Active Power: Not Reported 00:08:50.199 Non-Operational Permissive Mode: Not Supported 00:08:50.199 00:08:50.199 Health Information 00:08:50.199 ================== 00:08:50.199 Critical Warnings: 00:08:50.199 Available Spare Space: OK 00:08:50.199 Temperature: OK 00:08:50.199 Device Reliability: OK 00:08:50.199 Read Only: No 00:08:50.199 Volatile Memory Backup: OK 00:08:50.199 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.199 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.199 Available Spare: 0% 00:08:50.199 Available Spare Threshold: 0% 00:08:50.199 Life Percentage Used: 0% 00:08:50.199 Data Units Read: 737 00:08:50.199 Data Units Written: 666 00:08:50.199 Host Read Commands: 32412 00:08:50.199 Host Write Commands: 31835 00:08:50.199 Controller Busy Time: 0 minutes 00:08:50.199 Power Cycles: 0 00:08:50.199 Power On Hours: 0 hours 00:08:50.199 Unsafe Shutdowns: 0 00:08:50.199 Unrecoverable Media Errors: 0 00:08:50.199 Lifetime Error Log Entries: 0 00:08:50.199 Warning Temperature Time: 0 minutes 00:08:50.199 Critical Temperature Time: 0 minutes 00:08:50.199 00:08:50.199 Number of Queues 00:08:50.199 ================ 00:08:50.199 Number of I/O Submission Queues: 64 00:08:50.199 Number of I/O Completion Queues: 64 00:08:50.199 00:08:50.199 ZNS Specific Controller Data 00:08:50.199 ============================ 00:08:50.199 Zone Append Size Limit: 0 00:08:50.199 00:08:50.199 00:08:50.199 Active Namespaces 00:08:50.199 ================= 00:08:50.199 Namespace ID:1 00:08:50.199 Error Recovery Timeout: Unlimited 00:08:50.199 Command Set Identifier: NVM (00h) 00:08:50.199 Deallocate: Supported 00:08:50.199 Deallocated/Unwritten Error: Supported 00:08:50.199 Deallocated Read Value: All 0x00 00:08:50.199 Deallocate in Write Zeroes: Not Supported 00:08:50.199 Deallocated Guard Field: 0xFFFF 00:08:50.199 Flush: Supported 00:08:50.199 Reservation: Not Supported 00:08:50.199 Namespace Sharing Capabilities: Multiple Controllers 00:08:50.199 Size (in LBAs): 262144 (1GiB) 00:08:50.199 Capacity (in LBAs): 262144 (1GiB) 00:08:50.199 Utilization (in LBAs): 262144 (1GiB) 00:08:50.199 Thin Provisioning: Not Supported 00:08:50.199 Per-NS Atomic Units: No 00:08:50.199 Maximum Single Source Range Length: 128 00:08:50.199 Maximum Copy Length: 128 00:08:50.199 Maximum Source Range Count: 128 00:08:50.199 NGUID/EUI64 Never Reused: No 00:08:50.199 Namespace Write Protected: No 00:08:50.199 Endurance group ID: 1 00:08:50.199 Number of LBA Formats: 8 00:08:50.199 Current LBA Format: LBA Format #04 00:08:50.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.199 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.199 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.199 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.199 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.199 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.199 LBA Format #06: Data Size: 4096 Metadata Size: 16 
00:08:50.199 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.199 00:08:50.199 Get Feature FDP: 00:08:50.199 ================ 00:08:50.199 Enabled: Yes 00:08:50.199 FDP configuration index: 0 00:08:50.199 00:08:50.199 FDP configurations log page 00:08:50.199 =========================== 00:08:50.199 Number of FDP configurations: 1 00:08:50.199 Version: 0 00:08:50.199 Size: 112 00:08:50.199 FDP Configuration Descriptor: 0 00:08:50.199 Descriptor Size: 96 00:08:50.199 Reclaim Group Identifier format: 2 00:08:50.199 FDP Volatile Write Cache: Not Present 00:08:50.199 FDP Configuration: Valid 00:08:50.199 Vendor Specific Size: 0 00:08:50.199 Number of Reclaim Groups: 2 00:08:50.199 Number of Reclaim Unit Handles: 8 00:08:50.199 Max Placement Identifiers: 128 00:08:50.199 Number of Namespaces Supported: 256 00:08:50.199 Reclaim Unit Nominal Size: 6000000 bytes 00:08:50.199 Estimated Reclaim Unit Time Limit: Not Reported 00:08:50.199 RUH Desc #000: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #001: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #002: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #003: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #004: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #005: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #006: RUH Type: Initially Isolated 00:08:50.199 RUH Desc #007: RUH Type: Initially Isolated 00:08:50.199 00:08:50.199 FDP reclaim unit handle usage log page 00:08:50.199 ====================================== 00:08:50.199 Number of Reclaim Unit Handles: 8 00:08:50.199 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:50.199 RUH Usage Desc #001: RUH Attributes: Unused 00:08:50.199 RUH Usage Desc #002: RUH Attributes: Unused 00:08:50.199 RUH Usage Desc #003: RUH Attributes: Unused 00:08:50.199 RUH Usage Desc #004: RUH Attributes: Unused 00:08:50.199 [2024-10-17 10:07:53.165753] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 62991 terminated unexpected 00:08:50.199 RUH Usage Desc #005: RUH Attributes: Unused 00:08:50.199 RUH Usage Desc #006: RUH Attributes: Unused 00:08:50.199 RUH Usage Desc #007: RUH Attributes: Unused 00:08:50.199 00:08:50.199 FDP statistics log page 00:08:50.199 ======================= 00:08:50.199 Host bytes with metadata written: 416063488 00:08:50.199 Media bytes with metadata written: 416108544 00:08:50.199 Media bytes erased: 0 00:08:50.199 00:08:50.199 FDP events log page 00:08:50.199 =================== 00:08:50.199 Number of FDP events: 0 00:08:50.199 00:08:50.199 NVM Specific Namespace Data 00:08:50.199 =========================== 00:08:50.199 Logical Block Storage Tag Mask: 0 00:08:50.199 Protection Information Capabilities: 00:08:50.199 16b Guard Protection Information Storage Tag Support: No 00:08:50.199 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.199 Storage Tag Check Read Support: No 00:08:50.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard
PI 00:08:50.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.199 ===================================================== 00:08:50.199 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.199 ===================================================== 00:08:50.199 Controller Capabilities/Features 00:08:50.199 ================================ 00:08:50.199 Vendor ID: 1b36 00:08:50.199 Subsystem Vendor ID: 1af4 00:08:50.199 Serial Number: 12340 00:08:50.199 Model Number: QEMU NVMe Ctrl 00:08:50.199 Firmware Version: 8.0.0 00:08:50.199 Recommended Arb Burst: 6 00:08:50.199 IEEE OUI Identifier: 00 54 52 00:08:50.200 Multi-path I/O 00:08:50.200 May have multiple subsystem ports: No 00:08:50.200 May have multiple controllers: No 00:08:50.200 Associated with SR-IOV VF: No 00:08:50.200 Max Data Transfer Size: 524288 00:08:50.200 Max Number of Namespaces: 256 00:08:50.200 Max Number of I/O Queues: 64 00:08:50.200 NVMe Specification Version (VS): 1.4 00:08:50.200 NVMe Specification Version (Identify): 1.4 00:08:50.200 Maximum Queue Entries: 2048 00:08:50.200 Contiguous Queues Required: Yes 00:08:50.200 Arbitration Mechanisms Supported 00:08:50.200 Weighted Round Robin: Not Supported 00:08:50.200 Vendor Specific: Not Supported 00:08:50.200 Reset Timeout: 7500 ms 00:08:50.200 Doorbell Stride: 4 bytes 00:08:50.200 NVM Subsystem Reset: Not Supported 00:08:50.200 Command Sets Supported 00:08:50.200 NVM Command Set: Supported 00:08:50.200 Boot Partition: Not Supported 00:08:50.200 Memory Page Size Minimum: 4096 bytes 00:08:50.200 Memory Page Size Maximum: 65536 bytes 00:08:50.200 Persistent Memory Region: Not Supported 00:08:50.200 Optional Asynchronous Events Supported 00:08:50.200 Namespace Attribute Notices: Supported 00:08:50.200 Firmware Activation Notices: Not Supported 00:08:50.200 ANA Change Notices: Not Supported 00:08:50.200 PLE Aggregate Log Change Notices: Not Supported 00:08:50.200 LBA Status Info Alert Notices: Not Supported 00:08:50.200 EGE Aggregate Log Change Notices: Not Supported 00:08:50.200 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.200 Zone Descriptor Change Notices: Not Supported 00:08:50.200 Discovery Log Change Notices: Not Supported 00:08:50.200 Controller Attributes 00:08:50.200 128-bit Host Identifier: Not Supported 00:08:50.200 Non-Operational Permissive Mode: Not Supported 00:08:50.200 NVM Sets: Not Supported 00:08:50.200 Read Recovery Levels: Not Supported 00:08:50.200 Endurance Groups: Not Supported 00:08:50.200 Predictable Latency Mode: Not Supported 00:08:50.200 Traffic Based Keep Alive: Not Supported 00:08:50.200 Namespace Granularity: Not Supported 00:08:50.200 SQ Associations: Not Supported 00:08:50.200 UUID List: Not Supported 00:08:50.200 Multi-Domain Subsystem: Not Supported 00:08:50.200 Fixed Capacity Management: Not Supported 00:08:50.200 Variable Capacity Management: Not Supported 00:08:50.200 Delete Endurance Group: Not Supported 00:08:50.200 Delete NVM Set: Not Supported 00:08:50.200 Extended LBA Formats Supported: Supported 00:08:50.200 Flexible Data Placement Supported: Not Supported 00:08:50.200 00:08:50.200 Controller Memory Buffer Support 00:08:50.200 ================================ 00:08:50.200 Supported: No 00:08:50.200 00:08:50.200 Persistent Memory Region Support 00:08:50.200 ================================ 00:08:50.200 Supported: No 00:08:50.200 00:08:50.200 Admin Command Set Attributes
00:08:50.200 ============================ 00:08:50.200 Security Send/Receive: Not Supported 00:08:50.200 Format NVM: Supported 00:08:50.200 Firmware Activate/Download: Not Supported 00:08:50.200 Namespace Management: Supported 00:08:50.200 Device Self-Test: Not Supported 00:08:50.200 Directives: Supported 00:08:50.200 NVMe-MI: Not Supported 00:08:50.200 Virtualization Management: Not Supported 00:08:50.200 Doorbell Buffer Config: Supported 00:08:50.200 Get LBA Status Capability: Not Supported 00:08:50.200 Command & Feature Lockdown Capability: Not Supported 00:08:50.200 Abort Command Limit: 4 00:08:50.200 Async Event Request Limit: 4 00:08:50.200 Number of Firmware Slots: N/A 00:08:50.200 Firmware Slot 1 Read-Only: N/A 00:08:50.200 Firmware Activation Without Reset: N/A 00:08:50.200 Multiple Update Detection Support: N/A 00:08:50.200 Firmware Update Granularity: No Information Provided 00:08:50.200 Per-Namespace SMART Log: Yes 00:08:50.200 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.200 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:50.200 Command Effects Log Page: Supported 00:08:50.200 Get Log Page Extended Data: Supported 00:08:50.200 Telemetry Log Pages: Not Supported 00:08:50.200 Persistent Event Log Pages: Not Supported 00:08:50.200 Supported Log Pages Log Page: May Support 00:08:50.200 Commands Supported & Effects Log Page: Not Supported 00:08:50.200 Feature Identifiers & Effects Log Page: May Support 00:08:50.200 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.200 Data Area 4 for Telemetry Log: Not Supported 00:08:50.200 Error Log Page Entries Supported: 1 00:08:50.200 Keep Alive: Not Supported 00:08:50.200 00:08:50.200 NVM Command Set Attributes 00:08:50.200 ========================== 00:08:50.200 Submission Queue Entry Size 00:08:50.200 Max: 64 00:08:50.200 Min: 64 00:08:50.200 Completion Queue Entry Size 00:08:50.200 Max: 16 00:08:50.200 Min: 16 00:08:50.200 Number of Namespaces: 256 00:08:50.200 Compare Command: Supported 00:08:50.200 Write Uncorrectable Command: Not Supported 00:08:50.200 Dataset Management Command: Supported 00:08:50.200 Write Zeroes Command: Supported 00:08:50.200 Set Features Save Field: Supported 00:08:50.200 Reservations: Not Supported 00:08:50.200 Timestamp: Supported 00:08:50.200 Copy: Supported 00:08:50.200 Volatile Write Cache: Present 00:08:50.200 Atomic Write Unit (Normal): 1 00:08:50.200 Atomic Write Unit (PFail): 1 00:08:50.200 Atomic Compare & Write Unit: 1 00:08:50.200 Fused Compare & Write: Not Supported 00:08:50.200 Scatter-Gather List 00:08:50.200 SGL Command Set: Supported 00:08:50.200 SGL Keyed: Not Supported 00:08:50.200 SGL Bit Bucket Descriptor: Not Supported 00:08:50.200 SGL Metadata Pointer: Not Supported 00:08:50.200 Oversized SGL: Not Supported 00:08:50.200 SGL Metadata Address: Not Supported 00:08:50.200 SGL Offset: Not Supported 00:08:50.200 Transport SGL Data Block: Not Supported 00:08:50.200 Replay Protected Memory Block: Not Supported 00:08:50.200 00:08:50.200 Firmware Slot Information 00:08:50.200 ========================= 00:08:50.200 Active slot: 1 00:08:50.200 Slot 1 Firmware Revision: 1.0 00:08:50.200 00:08:50.200 00:08:50.200 Commands Supported and Effects 00:08:50.200 ============================== 00:08:50.200 Admin Commands 00:08:50.200 -------------- 00:08:50.200 Delete I/O Submission Queue (00h): Supported 00:08:50.200 Create I/O Submission Queue (01h): Supported 00:08:50.200 Get Log Page (02h): Supported 00:08:50.200 Delete I/O Completion Queue (04h): Supported 00:08:50.200 Create I/O
Completion Queue (05h): Supported 00:08:50.200 Identify (06h): Supported 00:08:50.200 Abort (08h): Supported 00:08:50.200 Set Features (09h): Supported 00:08:50.200 Get Features (0Ah): Supported 00:08:50.200 Asynchronous Event Request (0Ch): Supported 00:08:50.200 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.200 Directive Send (19h): Supported 00:08:50.200 Directive Receive (1Ah): Supported 00:08:50.200 Virtualization Management (1Ch): Supported 00:08:50.200 Doorbell Buffer Config (7Ch): Supported 00:08:50.200 Format NVM (80h): Supported LBA-Change 00:08:50.200 I/O Commands 00:08:50.200 ------------ 00:08:50.200 Flush (00h): Supported LBA-Change 00:08:50.200 Write (01h): Supported LBA-Change 00:08:50.200 Read (02h): Supported 00:08:50.200 Compare (05h): Supported 00:08:50.200 Write Zeroes (08h): Supported LBA-Change 00:08:50.200 Dataset Management (09h): Supported LBA-Change 00:08:50.200 Unknown (0Ch): Supported 00:08:50.200 Unknown (12h): Supported 00:08:50.200 Copy (19h): Supported LBA-Change 00:08:50.200 Unknown (1Dh): Supported LBA-Change 00:08:50.200 00:08:50.200 Error Log 00:08:50.200 ========= 00:08:50.200 00:08:50.200 Arbitration 00:08:50.200 =========== 00:08:50.200 Arbitration Burst: no limit 00:08:50.200 00:08:50.200 Power Management 00:08:50.200 ================ 00:08:50.200 Number of Power States: 1 00:08:50.200 Current Power State: Power State #0 00:08:50.200 Power State #0: 00:08:50.200 Max Power: 25.00 W 00:08:50.200 Non-Operational State: Operational 00:08:50.200 Entry Latency: 16 microseconds 00:08:50.200 Exit Latency: 4 microseconds 00:08:50.200 Relative Read Throughput: 0 00:08:50.200 Relative Read Latency: 0 00:08:50.200 Relative Write Throughput: 0 00:08:50.200 Relative Write Latency: 0 00:08:50.200 Idle Power: Not Reported 00:08:50.200 Active Power: Not Reported 00:08:50.200 Non-Operational Permissive Mode: Not Supported 00:08:50.200 00:08:50.200 Health Information 00:08:50.200 ================== 00:08:50.200 Critical Warnings: 00:08:50.200 Available Spare Space: OK 00:08:50.200 Temperature: OK 00:08:50.200 Device Reliability: OK 00:08:50.200 Read Only: No 00:08:50.200 Volatile Memory Backup: OK 00:08:50.200 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.200 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.200 Available Spare: 0% 00:08:50.200 Available Spare Threshold: 0% 00:08:50.200 Life Percentage Used: 0% 00:08:50.200 Data Units Read: 631 00:08:50.200 Data Units Written: 559 00:08:50.200 Host Read Commands: 31226 00:08:50.200 Host Write Commands: 31012 00:08:50.200 Controller Busy Time: 0 minutes 00:08:50.200 Power Cycles: 0 00:08:50.200 Power On Hours: 0 hours 00:08:50.201 Unsafe Shutdowns: 0 00:08:50.201 Unrecoverable Media Errors: 0 00:08:50.201 Lifetime Error Log Entries: 0 00:08:50.201 Warning Temperature Time: 0 minutes 00:08:50.201 Critical Temperature Time: 0 minutes 00:08:50.201 00:08:50.201 Number of Queues 00:08:50.201 ================ 00:08:50.201 Number of I/O Submission Queues: 64 00:08:50.201 Number of I/O Completion Queues: 64 00:08:50.201 00:08:50.201 ZNS Specific Controller Data 00:08:50.201 ============================ 00:08:50.201 Zone Append Size Limit: 0 00:08:50.201 00:08:50.201 00:08:50.201 Active Namespaces 00:08:50.201 ================= 00:08:50.201 Namespace ID:1 00:08:50.201 Error Recovery Timeout: Unlimited 00:08:50.201 Command Set Identifier: NVM (00h) 00:08:50.201 Deallocate: Supported 00:08:50.201 Deallocated/Unwritten Error: Supported 00:08:50.201 Deallocated Read Value: All 0x00 00:08:50.201 
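The lcov probe near the start of this test (lt 1.15 2 via cmp_versions) decides which coverage flags get exported. A minimal re-statement of that comparison logic, not the scripts/common.sh implementation itself:

  #!/usr/bin/env bash
  # Sketch of the dotted-version compare traced earlier (illustration only).
  lt() {                      # lt A B: succeed when version A < version B
    local IFS=.-:             # split components on '.', '-' and ':'
    local -a a=($1) b=($2); local i
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                  # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the traced return 0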
Deallocate in Write Zeroes: Not Supported 00:08:50.201 Deallocated Guard Field: 0xFFFF 00:08:50.201 Flush: Supported 00:08:50.201 Reservation: Not Supported 00:08:50.201 Metadata Transferred as: Separate Metadata Buffer 00:08:50.201 Namespace Sharing Capabilities: Private 00:08:50.201 Size (in LBAs): 1548666 (5GiB) 00:08:50.201 Capacity (in LBAs): 1548666 (5GiB) 00:08:50.201 Utilization (in LBAs): 1548666 (5GiB) 00:08:50.201 Thin Provisioning: Not Supported 00:08:50.201 Per-NS Atomic Units: No 00:08:50.201 Maximum Single Source Range Length: 128 00:08:50.201 Maximum Copy Length: 128 00:08:50.201 Maximum Source Range Count: 128 00:08:50.201 NGUID/EUI64 Never Reused: No 00:08:50.201 Namespace Write Protected: No 00:08:50.201 Number of LBA Formats: 8 00:08:50.201 Current LBA Format: LBA Format #07 00:08:50.201 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.201 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.201 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.201 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.201 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.201 [2024-10-17 10:07:53.166724] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 62991 terminated unexpected 00:08:50.201 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.201 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.201 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.201 00:08:50.201 NVM Specific Namespace Data 00:08:50.201 =========================== 00:08:50.201 Logical Block Storage Tag Mask: 0 00:08:50.201 Protection Information Capabilities: 00:08:50.201 16b Guard Protection Information Storage Tag Support: No 00:08:50.201 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.201 Storage Tag Check Read Support: No 00:08:50.201 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.201 ===================================================== 00:08:50.201 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.201 ===================================================== 00:08:50.201 Controller Capabilities/Features 00:08:50.201 ================================ 00:08:50.201 Vendor ID: 1b36 00:08:50.201 Subsystem Vendor ID: 1af4 00:08:50.201 Serial Number: 12342 00:08:50.201 Model Number: QEMU NVMe Ctrl 00:08:50.201 Firmware Version: 8.0.0 00:08:50.201 Recommended Arb Burst: 6 00:08:50.201 IEEE OUI Identifier: 00 54 52 00:08:50.201 Multi-path I/O 00:08:50.201 May have multiple subsystem ports: No 00:08:50.201 May have multiple controllers: No 00:08:50.201 Associated with SR-IOV VF: No 00:08:50.201 Max Data Transfer Size: 524288 00:08:50.201 Max Number of Namespaces: 256 00:08:50.201 Max Number of I/O Queues: 64 00:08:50.201 NVMe
Specification Version (VS): 1.4 00:08:50.201 NVMe Specification Version (Identify): 1.4 00:08:50.201 Maximum Queue Entries: 2048 00:08:50.201 Contiguous Queues Required: Yes 00:08:50.201 Arbitration Mechanisms Supported 00:08:50.201 Weighted Round Robin: Not Supported 00:08:50.201 Vendor Specific: Not Supported 00:08:50.201 Reset Timeout: 7500 ms 00:08:50.201 Doorbell Stride: 4 bytes 00:08:50.201 NVM Subsystem Reset: Not Supported 00:08:50.201 Command Sets Supported 00:08:50.201 NVM Command Set: Supported 00:08:50.201 Boot Partition: Not Supported 00:08:50.201 Memory Page Size Minimum: 4096 bytes 00:08:50.201 Memory Page Size Maximum: 65536 bytes 00:08:50.201 Persistent Memory Region: Not Supported 00:08:50.201 Optional Asynchronous Events Supported 00:08:50.201 Namespace Attribute Notices: Supported 00:08:50.201 Firmware Activation Notices: Not Supported 00:08:50.201 ANA Change Notices: Not Supported 00:08:50.201 PLE Aggregate Log Change Notices: Not Supported 00:08:50.201 LBA Status Info Alert Notices: Not Supported 00:08:50.201 EGE Aggregate Log Change Notices: Not Supported 00:08:50.201 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.201 Zone Descriptor Change Notices: Not Supported 00:08:50.201 Discovery Log Change Notices: Not Supported 00:08:50.201 Controller Attributes 00:08:50.201 128-bit Host Identifier: Not Supported 00:08:50.201 Non-Operational Permissive Mode: Not Supported 00:08:50.201 NVM Sets: Not Supported 00:08:50.201 Read Recovery Levels: Not Supported 00:08:50.201 Endurance Groups: Not Supported 00:08:50.201 Predictable Latency Mode: Not Supported 00:08:50.201 Traffic Based Keep Alive: Not Supported 00:08:50.201 Namespace Granularity: Not Supported 00:08:50.201 SQ Associations: Not Supported 00:08:50.201 UUID List: Not Supported 00:08:50.201 Multi-Domain Subsystem: Not Supported 00:08:50.201 Fixed Capacity Management: Not Supported 00:08:50.201 Variable Capacity Management: Not Supported 00:08:50.201 Delete Endurance Group: Not Supported 00:08:50.201 Delete NVM Set: Not Supported 00:08:50.201 Extended LBA Formats Supported: Supported 00:08:50.201 Flexible Data Placement Supported: Not Supported 00:08:50.201 00:08:50.201 Controller Memory Buffer Support 00:08:50.201 ================================ 00:08:50.201 Supported: No 00:08:50.201 00:08:50.201 Persistent Memory Region Support 00:08:50.201 ================================ 00:08:50.201 Supported: No 00:08:50.201 00:08:50.201 Admin Command Set Attributes 00:08:50.201 ============================ 00:08:50.201 Security Send/Receive: Not Supported 00:08:50.201 Format NVM: Supported 00:08:50.201 Firmware Activate/Download: Not Supported 00:08:50.201 Namespace Management: Supported 00:08:50.201 Device Self-Test: Not Supported 00:08:50.201 Directives: Supported 00:08:50.201 NVMe-MI: Not Supported 00:08:50.201 Virtualization Management: Not Supported 00:08:50.201 Doorbell Buffer Config: Supported 00:08:50.201 Get LBA Status Capability: Not Supported 00:08:50.201 Command & Feature Lockdown Capability: Not Supported 00:08:50.201 Abort Command Limit: 4 00:08:50.201 Async Event Request Limit: 4 00:08:50.201 Number of Firmware Slots: N/A 00:08:50.201 Firmware Slot 1 Read-Only: N/A 00:08:50.201 Firmware Activation Without Reset: N/A 00:08:50.201 Multiple Update Detection Support: N/A 00:08:50.201 Firmware Update Granularity: No Information Provided 00:08:50.201 Per-Namespace SMART Log: Yes 00:08:50.201 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.201 Subsystem NQN: nqn.2019-08.org.qemu:12342
00:08:50.201 Command Effects Log Page: Supported 00:08:50.201 Get Log Page Extended Data: Supported 00:08:50.201 Telemetry Log Pages: Not Supported 00:08:50.202 Persistent Event Log Pages: Not Supported 00:08:50.202 Supported Log Pages Log Page: May Support 00:08:50.202 Commands Supported & Effects Log Page: Not Supported 00:08:50.202 Feature Identifiers & Effects Log Page: May Support 00:08:50.202 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.202 Data Area 4 for Telemetry Log: Not Supported 00:08:50.202 Error Log Page Entries Supported: 1 00:08:50.202 Keep Alive: Not Supported 00:08:50.202 00:08:50.202 NVM Command Set Attributes 00:08:50.202 ========================== 00:08:50.202 Submission Queue Entry Size 00:08:50.202 Max: 64 00:08:50.202 Min: 64 00:08:50.202 Completion Queue Entry Size 00:08:50.202 Max: 16 00:08:50.202 Min: 16 00:08:50.202 Number of Namespaces: 256 00:08:50.202 Compare Command: Supported 00:08:50.202 Write Uncorrectable Command: Not Supported 00:08:50.202 Dataset Management Command: Supported 00:08:50.202 Write Zeroes Command: Supported 00:08:50.202 Set Features Save Field: Supported 00:08:50.202 Reservations: Not Supported 00:08:50.202 Timestamp: Supported 00:08:50.202 Copy: Supported 00:08:50.202 Volatile Write Cache: Present 00:08:50.202 Atomic Write Unit (Normal): 1 00:08:50.202 Atomic Write Unit (PFail): 1 00:08:50.202 Atomic Compare & Write Unit: 1 00:08:50.202 Fused Compare & Write: Not Supported 00:08:50.202 Scatter-Gather List 00:08:50.202 SGL Command Set: Supported 00:08:50.202 SGL Keyed: Not Supported 00:08:50.202 SGL Bit Bucket Descriptor: Not Supported 00:08:50.202 SGL Metadata Pointer: Not Supported 00:08:50.202 Oversized SGL: Not Supported 00:08:50.202 SGL Metadata Address: Not Supported 00:08:50.202 SGL Offset: Not Supported 00:08:50.202 Transport SGL Data Block: Not Supported 00:08:50.202 Replay Protected Memory Block: Not Supported 00:08:50.202 00:08:50.202 Firmware Slot Information 00:08:50.202 ========================= 00:08:50.202 Active slot: 1 00:08:50.202 Slot 1 Firmware Revision: 1.0 00:08:50.202 00:08:50.202 00:08:50.202 Commands Supported and Effects 00:08:50.202 ============================== 00:08:50.202 Admin Commands 00:08:50.202 -------------- 00:08:50.202 Delete I/O Submission Queue (00h): Supported 00:08:50.202 Create I/O Submission Queue (01h): Supported 00:08:50.202 Get Log Page (02h): Supported 00:08:50.202 Delete I/O Completion Queue (04h): Supported 00:08:50.202 Create I/O Completion Queue (05h): Supported 00:08:50.202 Identify (06h): Supported 00:08:50.202 Abort (08h): Supported 00:08:50.202 Set Features (09h): Supported 00:08:50.202 Get Features (0Ah): Supported 00:08:50.202 Asynchronous Event Request (0Ch): Supported 00:08:50.202 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.202 Directive Send (19h): Supported 00:08:50.202 Directive Receive (1Ah): Supported 00:08:50.202 Virtualization Management (1Ch): Supported 00:08:50.202 Doorbell Buffer Config (7Ch): Supported 00:08:50.202 Format NVM (80h): Supported LBA-Change 00:08:50.202 I/O Commands 00:08:50.202 ------------ 00:08:50.202 Flush (00h): Supported LBA-Change 00:08:50.202 Write (01h): Supported LBA-Change 00:08:50.202 Read (02h): Supported 00:08:50.202 Compare (05h): Supported 00:08:50.202 Write Zeroes (08h): Supported LBA-Change 00:08:50.202 Dataset Management (09h): Supported LBA-Change 00:08:50.202 Unknown (0Ch): Supported 00:08:50.202 Unknown (12h): Supported 00:08:50.202 Copy (19h): Supported LBA-Change 00:08:50.202 Unknown (1Dh):
Supported LBA-Change 00:08:50.202 00:08:50.202 Error Log 00:08:50.202 ========= 00:08:50.202 00:08:50.202 Arbitration 00:08:50.202 =========== 00:08:50.202 Arbitration Burst: no limit 00:08:50.202 00:08:50.202 Power Management 00:08:50.202 ================ 00:08:50.202 Number of Power States: 1 00:08:50.202 Current Power State: Power State #0 00:08:50.202 Power State #0: 00:08:50.202 Max Power: 25.00 W 00:08:50.202 Non-Operational State: Operational 00:08:50.202 Entry Latency: 16 microseconds 00:08:50.202 Exit Latency: 4 microseconds 00:08:50.202 Relative Read Throughput: 0 00:08:50.202 Relative Read Latency: 0 00:08:50.202 Relative Write Throughput: 0 00:08:50.202 Relative Write Latency: 0 00:08:50.202 Idle Power: Not Reported 00:08:50.202 Active Power: Not Reported 00:08:50.202 Non-Operational Permissive Mode: Not Supported 00:08:50.202 00:08:50.202 Health Information 00:08:50.202 ================== 00:08:50.202 Critical Warnings: 00:08:50.202 Available Spare Space: OK 00:08:50.202 Temperature: OK 00:08:50.202 Device Reliability: OK 00:08:50.202 Read Only: No 00:08:50.202 Volatile Memory Backup: OK 00:08:50.202 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.202 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.202 Available Spare: 0% 00:08:50.202 Available Spare Threshold: 0% 00:08:50.202 Life Percentage Used: 0% 00:08:50.202 Data Units Read: 2005 00:08:50.202 Data Units Written: 1792 00:08:50.202 Host Read Commands: 95586 00:08:50.202 Host Write Commands: 93855 00:08:50.202 Controller Busy Time: 0 minutes 00:08:50.202 Power Cycles: 0 00:08:50.202 Power On Hours: 0 hours 00:08:50.202 Unsafe Shutdowns: 0 00:08:50.202 Unrecoverable Media Errors: 0 00:08:50.202 Lifetime Error Log Entries: 0 00:08:50.202 Warning Temperature Time: 0 minutes 00:08:50.202 Critical Temperature Time: 0 minutes 00:08:50.202 00:08:50.202 Number of Queues 00:08:50.202 ================ 00:08:50.202 Number of I/O Submission Queues: 64 00:08:50.202 Number of I/O Completion Queues: 64 00:08:50.202 00:08:50.202 ZNS Specific Controller Data 00:08:50.202 ============================ 00:08:50.202 Zone Append Size Limit: 0 00:08:50.202 00:08:50.202 00:08:50.202 Active Namespaces 00:08:50.202 ================= 00:08:50.202 Namespace ID:1 00:08:50.202 Error Recovery Timeout: Unlimited 00:08:50.202 Command Set Identifier: NVM (00h) 00:08:50.202 Deallocate: Supported 00:08:50.202 Deallocated/Unwritten Error: Supported 00:08:50.202 Deallocated Read Value: All 0x00 00:08:50.202 Deallocate in Write Zeroes: Not Supported 00:08:50.202 Deallocated Guard Field: 0xFFFF 00:08:50.202 Flush: Supported 00:08:50.202 Reservation: Not Supported 00:08:50.202 Namespace Sharing Capabilities: Private 00:08:50.202 Size (in LBAs): 1048576 (4GiB) 00:08:50.202 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.202 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.202 Thin Provisioning: Not Supported 00:08:50.202 Per-NS Atomic Units: No 00:08:50.202 Maximum Single Source Range Length: 128 00:08:50.202 Maximum Copy Length: 128 00:08:50.202 Maximum Source Range Count: 128 00:08:50.202 NGUID/EUI64 Never Reused: No 00:08:50.202 Namespace Write Protected: No 00:08:50.202 Number of LBA Formats: 8 00:08:50.202 Current LBA Format: LBA Format #04 00:08:50.202 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.202 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.202 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.202 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.202 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:50.202 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.202 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.202 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.202 00:08:50.202 NVM Specific Namespace Data 00:08:50.202 =========================== 00:08:50.202 Logical Block Storage Tag Mask: 0 00:08:50.202 Protection Information Capabilities: 00:08:50.202 16b Guard Protection Information Storage Tag Support: No 00:08:50.202 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.202 Storage Tag Check Read Support: No 00:08:50.202 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.202 Namespace ID:2 00:08:50.202 Error Recovery Timeout: Unlimited 00:08:50.202 Command Set Identifier: NVM (00h) 00:08:50.202 Deallocate: Supported 00:08:50.202 Deallocated/Unwritten Error: Supported 00:08:50.202 Deallocated Read Value: All 0x00 00:08:50.202 Deallocate in Write Zeroes: Not Supported 00:08:50.202 Deallocated Guard Field: 0xFFFF 00:08:50.202 Flush: Supported 00:08:50.202 Reservation: Not Supported 00:08:50.202 Namespace Sharing Capabilities: Private 00:08:50.202 Size (in LBAs): 1048576 (4GiB) 00:08:50.202 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.202 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.202 Thin Provisioning: Not Supported 00:08:50.202 Per-NS Atomic Units: No 00:08:50.202 Maximum Single Source Range Length: 128 00:08:50.202 Maximum Copy Length: 128 00:08:50.202 Maximum Source Range Count: 128 00:08:50.202 NGUID/EUI64 Never Reused: No 00:08:50.202 Namespace Write Protected: No 00:08:50.202 Number of LBA Formats: 8 00:08:50.202 Current LBA Format: LBA Format #04 00:08:50.202 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.202 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.203 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.203 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.203 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.203 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.203 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.203 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.203 00:08:50.203 NVM Specific Namespace Data 00:08:50.203 =========================== 00:08:50.203 Logical Block Storage Tag Mask: 0 00:08:50.203 Protection Information Capabilities: 00:08:50.203 16b Guard Protection Information Storage Tag Support: No 00:08:50.203 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.203 Storage Tag Check Read Support: No 00:08:50.203 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Namespace ID:3 00:08:50.203 Error Recovery Timeout: Unlimited 00:08:50.203 Command Set Identifier: NVM (00h) 00:08:50.203 Deallocate: Supported 00:08:50.203 Deallocated/Unwritten Error: Supported 00:08:50.203 Deallocated Read Value: All 0x00 00:08:50.203 Deallocate in Write Zeroes: Not Supported 00:08:50.203 Deallocated Guard Field: 0xFFFF 00:08:50.203 Flush: Supported 00:08:50.203 Reservation: Not Supported 00:08:50.203 Namespace Sharing Capabilities: Private 00:08:50.203 Size (in LBAs): 1048576 (4GiB) 00:08:50.203 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.203 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.203 Thin Provisioning: Not Supported 00:08:50.203 Per-NS Atomic Units: No 00:08:50.203 Maximum Single Source Range Length: 128 00:08:50.203 Maximum Copy Length: 128 00:08:50.203 Maximum Source Range Count: 128 00:08:50.203 NGUID/EUI64 Never Reused: No 00:08:50.203 Namespace Write Protected: No 00:08:50.203 Number of LBA Formats: 8 00:08:50.203 Current LBA Format: LBA Format #04 00:08:50.203 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.203 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.203 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.203 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.203 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.203 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.203 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.203 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.203 00:08:50.203 NVM Specific Namespace Data 00:08:50.203 =========================== 00:08:50.203 Logical Block Storage Tag Mask: 0 00:08:50.203 Protection Information Capabilities: 00:08:50.203 16b Guard Protection Information Storage Tag Support: No 00:08:50.203 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.203 Storage Tag Check Read Support: No 00:08:50.203 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.203 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.203 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:50.462 ===================================================== 00:08:50.462 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.462 ===================================================== 00:08:50.462 Controller Capabilities/Features 00:08:50.462 ================================ 00:08:50.462 Vendor ID: 1b36 00:08:50.462 Subsystem Vendor ID: 1af4 00:08:50.462 Serial Number: 12340 00:08:50.462 Model Number: QEMU NVMe Ctrl 00:08:50.462 Firmware Version: 8.0.0 00:08:50.462 Recommended Arb Burst: 6 00:08:50.462 IEEE OUI Identifier: 00 54 52 00:08:50.462 Multi-path I/O 00:08:50.462 May have multiple subsystem ports: No 00:08:50.462 May have multiple controllers: No 00:08:50.462 Associated with SR-IOV VF: No 00:08:50.462 Max Data Transfer Size: 524288 00:08:50.462 Max Number of Namespaces: 256 00:08:50.462 Max Number of I/O Queues: 64 00:08:50.462 NVMe Specification Version (VS): 1.4 00:08:50.462 NVMe Specification Version (Identify): 1.4 00:08:50.462 Maximum Queue Entries: 2048 00:08:50.462 Contiguous Queues Required: Yes 00:08:50.462 Arbitration Mechanisms Supported 00:08:50.462 Weighted Round Robin: Not Supported 00:08:50.462 Vendor Specific: Not Supported 00:08:50.462 Reset Timeout: 7500 ms 00:08:50.462 Doorbell Stride: 4 bytes 00:08:50.462 NVM Subsystem Reset: Not Supported 00:08:50.462 Command Sets Supported 00:08:50.462 NVM Command Set: Supported 00:08:50.462 Boot Partition: Not Supported 00:08:50.462 Memory Page Size Minimum: 4096 bytes 00:08:50.462 Memory Page Size Maximum: 65536 bytes 00:08:50.462 Persistent Memory Region: Not Supported 00:08:50.462 Optional Asynchronous Events Supported 00:08:50.462 Namespace Attribute Notices: Supported 00:08:50.462 Firmware Activation Notices: Not Supported 00:08:50.462 ANA Change Notices: Not Supported 00:08:50.462 PLE Aggregate Log Change Notices: Not Supported 00:08:50.462 LBA Status Info Alert Notices: Not Supported 00:08:50.462 EGE Aggregate Log Change Notices: Not Supported 00:08:50.462 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.462 Zone Descriptor Change Notices: Not Supported 00:08:50.462 Discovery Log Change Notices: Not Supported 00:08:50.462 Controller Attributes 00:08:50.462 128-bit Host Identifier: Not Supported 00:08:50.462 Non-Operational Permissive Mode: Not Supported 00:08:50.462 NVM Sets: Not Supported 00:08:50.462 Read Recovery Levels: Not Supported 00:08:50.462 Endurance Groups: Not Supported 00:08:50.462 Predictable Latency Mode: Not Supported 00:08:50.462 Traffic Based Keep Alive: Not Supported 00:08:50.462 Namespace Granularity: Not Supported 00:08:50.462 SQ Associations: Not Supported 00:08:50.462 UUID List: Not Supported 00:08:50.462 Multi-Domain Subsystem: Not Supported 00:08:50.462 Fixed Capacity Management: Not Supported 00:08:50.462 Variable Capacity Management: Not Supported 00:08:50.462 Delete Endurance Group: Not Supported 00:08:50.462 Delete NVM Set: Not Supported 00:08:50.462 Extended LBA Formats Supported: Supported 00:08:50.462 Flexible Data Placement Supported: Not Supported 00:08:50.462 00:08:50.462 Controller Memory Buffer Support 00:08:50.462 ================================ 00:08:50.462 Supported: No 00:08:50.462 00:08:50.462 Persistent Memory Region Support 00:08:50.462 ================================ 00:08:50.462 Supported: No 00:08:50.462 00:08:50.462 Admin Command Set Attributes 00:08:50.462 ============================ 00:08:50.462 Security Send/Receive: Not Supported 00:08:50.462
Format NVM: Supported 00:08:50.462 Firmware Activate/Download: Not Supported 00:08:50.462 Namespace Management: Supported 00:08:50.462 Device Self-Test: Not Supported 00:08:50.462 Directives: Supported 00:08:50.462 NVMe-MI: Not Supported 00:08:50.462 Virtualization Management: Not Supported 00:08:50.462 Doorbell Buffer Config: Supported 00:08:50.462 Get LBA Status Capability: Not Supported 00:08:50.462 Command & Feature Lockdown Capability: Not Supported 00:08:50.462 Abort Command Limit: 4 00:08:50.462 Async Event Request Limit: 4 00:08:50.462 Number of Firmware Slots: N/A 00:08:50.462 Firmware Slot 1 Read-Only: N/A 00:08:50.462 Firmware Activation Without Reset: N/A 00:08:50.462 Multiple Update Detection Support: N/A 00:08:50.462 Firmware Update Granularity: No Information Provided 00:08:50.462 Per-Namespace SMART Log: Yes 00:08:50.462 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.462 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:50.462 Command Effects Log Page: Supported 00:08:50.462 Get Log Page Extended Data: Supported 00:08:50.462 Telemetry Log Pages: Not Supported 00:08:50.462 Persistent Event Log Pages: Not Supported 00:08:50.462 Supported Log Pages Log Page: May Support 00:08:50.463 Commands Supported & Effects Log Page: Not Supported 00:08:50.463 Feature Identifiers & Effects Log Page: May Support 00:08:50.463 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.463 Data Area 4 for Telemetry Log: Not Supported 00:08:50.463 Error Log Page Entries Supported: 1 00:08:50.463 Keep Alive: Not Supported 00:08:50.463 00:08:50.463 NVM Command Set Attributes 00:08:50.463 ========================== 00:08:50.463 Submission Queue Entry Size 00:08:50.463 Max: 64 00:08:50.463 Min: 64 00:08:50.463 Completion Queue Entry Size 00:08:50.463 Max: 16 00:08:50.463 Min: 16 00:08:50.463 Number of Namespaces: 256 00:08:50.463 Compare Command: Supported 00:08:50.463 Write Uncorrectable Command: Not Supported 00:08:50.463 Dataset Management Command: Supported 00:08:50.463 Write Zeroes Command: Supported 00:08:50.463 Set Features Save Field: Supported 00:08:50.463 Reservations: Not Supported 00:08:50.463 Timestamp: Supported 00:08:50.463 Copy: Supported 00:08:50.463 Volatile Write Cache: Present 00:08:50.463 Atomic Write Unit (Normal): 1 00:08:50.463 Atomic Write Unit (PFail): 1 00:08:50.463 Atomic Compare & Write Unit: 1 00:08:50.463 Fused Compare & Write: Not Supported 00:08:50.463 Scatter-Gather List 00:08:50.463 SGL Command Set: Supported 00:08:50.463 SGL Keyed: Not Supported 00:08:50.463 SGL Bit Bucket Descriptor: Not Supported 00:08:50.463 SGL Metadata Pointer: Not Supported 00:08:50.463 Oversized SGL: Not Supported 00:08:50.463 SGL Metadata Address: Not Supported 00:08:50.463 SGL Offset: Not Supported 00:08:50.463 Transport SGL Data Block: Not Supported 00:08:50.463 Replay Protected Memory Block: Not Supported 00:08:50.463 00:08:50.463 Firmware Slot Information 00:08:50.463 ========================= 00:08:50.463 Active slot: 1 00:08:50.463 Slot 1 Firmware Revision: 1.0 00:08:50.463 00:08:50.463 00:08:50.463 Commands Supported and Effects 00:08:50.463 ============================== 00:08:50.463 Admin Commands 00:08:50.463 -------------- 00:08:50.463 Delete I/O Submission Queue (00h): Supported 00:08:50.463 Create I/O Submission Queue (01h): Supported 00:08:50.463 Get Log Page (02h): Supported 00:08:50.463 Delete I/O Completion Queue (04h): Supported 00:08:50.463 Create I/O Completion Queue (05h): Supported 00:08:50.463 Identify (06h): Supported 00:08:50.463 Abort (08h): Supported
00:08:50.463 Set Features (09h): Supported 00:08:50.463 Get Features (0Ah): Supported 00:08:50.463 Asynchronous Event Request (0Ch): Supported 00:08:50.463 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.463 Directive Send (19h): Supported 00:08:50.463 Directive Receive (1Ah): Supported 00:08:50.463 Virtualization Management (1Ch): Supported 00:08:50.463 Doorbell Buffer Config (7Ch): Supported 00:08:50.463 Format NVM (80h): Supported LBA-Change 00:08:50.463 I/O Commands 00:08:50.463 ------------ 00:08:50.463 Flush (00h): Supported LBA-Change 00:08:50.463 Write (01h): Supported LBA-Change 00:08:50.463 Read (02h): Supported 00:08:50.463 Compare (05h): Supported 00:08:50.463 Write Zeroes (08h): Supported LBA-Change 00:08:50.463 Dataset Management (09h): Supported LBA-Change 00:08:50.463 Unknown (0Ch): Supported 00:08:50.463 Unknown (12h): Supported 00:08:50.463 Copy (19h): Supported LBA-Change 00:08:50.463 Unknown (1Dh): Supported LBA-Change 00:08:50.463 00:08:50.463 Error Log 00:08:50.463 ========= 00:08:50.463 00:08:50.463 Arbitration 00:08:50.463 =========== 00:08:50.463 Arbitration Burst: no limit 00:08:50.463 00:08:50.463 Power Management 00:08:50.463 ================ 00:08:50.463 Number of Power States: 1 00:08:50.463 Current Power State: Power State #0 00:08:50.463 Power State #0: 00:08:50.463 Max Power: 25.00 W 00:08:50.463 Non-Operational State: Operational 00:08:50.463 Entry Latency: 16 microseconds 00:08:50.463 Exit Latency: 4 microseconds 00:08:50.463 Relative Read Throughput: 0 00:08:50.463 Relative Read Latency: 0 00:08:50.463 Relative Write Throughput: 0 00:08:50.463 Relative Write Latency: 0 00:08:50.463 Idle Power: Not Reported 00:08:50.463 Active Power: Not Reported 00:08:50.463 Non-Operational Permissive Mode: Not Supported 00:08:50.463 00:08:50.463 Health Information 00:08:50.463 ================== 00:08:50.463 Critical Warnings: 00:08:50.463 Available Spare Space: OK 00:08:50.463 Temperature: OK 00:08:50.463 Device Reliability: OK 00:08:50.463 Read Only: No 00:08:50.463 Volatile Memory Backup: OK 00:08:50.463 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.463 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.463 Available Spare: 0% 00:08:50.463 Available Spare Threshold: 0% 00:08:50.463 Life Percentage Used: 0% 00:08:50.463 Data Units Read: 631 00:08:50.463 Data Units Written: 559 00:08:50.463 Host Read Commands: 31226 00:08:50.463 Host Write Commands: 31012 00:08:50.463 Controller Busy Time: 0 minutes 00:08:50.463 Power Cycles: 0 00:08:50.463 Power On Hours: 0 hours 00:08:50.463 Unsafe Shutdowns: 0 00:08:50.463 Unrecoverable Media Errors: 0 00:08:50.463 Lifetime Error Log Entries: 0 00:08:50.463 Warning Temperature Time: 0 minutes 00:08:50.463 Critical Temperature Time: 0 minutes 00:08:50.463 00:08:50.463 Number of Queues 00:08:50.463 ================ 00:08:50.463 Number of I/O Submission Queues: 64 00:08:50.463 Number of I/O Completion Queues: 64 00:08:50.463 00:08:50.463 ZNS Specific Controller Data 00:08:50.463 ============================ 00:08:50.463 Zone Append Size Limit: 0 00:08:50.463 00:08:50.463 00:08:50.463 Active Namespaces 00:08:50.463 ================= 00:08:50.463 Namespace ID:1 00:08:50.463 Error Recovery Timeout: Unlimited 00:08:50.463 Command Set Identifier: NVM (00h) 00:08:50.463 Deallocate: Supported 00:08:50.463 Deallocated/Unwritten Error: Supported 00:08:50.463 Deallocated Read Value: All 0x00 00:08:50.463 Deallocate in Write Zeroes: Not Supported 00:08:50.463 Deallocated Guard Field: 0xFFFF 00:08:50.463 Flush: 
Supported 00:08:50.463 Reservation: Not Supported 00:08:50.463 Metadata Transferred as: Separate Metadata Buffer 00:08:50.463 Namespace Sharing Capabilities: Private 00:08:50.463 Size (in LBAs): 1548666 (5GiB) 00:08:50.463 Capacity (in LBAs): 1548666 (5GiB) 00:08:50.463 Utilization (in LBAs): 1548666 (5GiB) 00:08:50.463 Thin Provisioning: Not Supported 00:08:50.463 Per-NS Atomic Units: No 00:08:50.463 Maximum Single Source Range Length: 128 00:08:50.463 Maximum Copy Length: 128 00:08:50.463 Maximum Source Range Count: 128 00:08:50.463 NGUID/EUI64 Never Reused: No 00:08:50.463 Namespace Write Protected: No 00:08:50.463 Number of LBA Formats: 8 00:08:50.463 Current LBA Format: LBA Format #07 00:08:50.463 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.463 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.463 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.463 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.463 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.463 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.463 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.463 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.463 00:08:50.463 NVM Specific Namespace Data 00:08:50.463 =========================== 00:08:50.463 Logical Block Storage Tag Mask: 0 00:08:50.463 Protection Information Capabilities: 00:08:50.463 16b Guard Protection Information Storage Tag Support: No 00:08:50.463 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.463 Storage Tag Check Read Support: No 00:08:50.463 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.463 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.463 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:50.722 ===================================================== 00:08:50.723 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.723 ===================================================== 00:08:50.723 Controller Capabilities/Features 00:08:50.723 ================================ 00:08:50.723 Vendor ID: 1b36 00:08:50.723 Subsystem Vendor ID: 1af4 00:08:50.723 Serial Number: 12341 00:08:50.723 Model Number: QEMU NVMe Ctrl 00:08:50.723 Firmware Version: 8.0.0 00:08:50.723 Recommended Arb Burst: 6 00:08:50.723 IEEE OUI Identifier: 00 54 52 00:08:50.723 Multi-path I/O 00:08:50.723 May have multiple subsystem ports: No 00:08:50.723 May have multiple controllers: No 00:08:50.723 Associated with SR-IOV VF: No 00:08:50.723 Max Data Transfer Size: 524288 00:08:50.723 Max Number of Namespaces: 256 00:08:50.723 Max Number of I/O Queues: 64 00:08:50.723 NVMe 
Specification Version (VS): 1.4 00:08:50.723 NVMe Specification Version (Identify): 1.4 00:08:50.723 Maximum Queue Entries: 2048 00:08:50.723 Contiguous Queues Required: Yes 00:08:50.723 Arbitration Mechanisms Supported 00:08:50.723 Weighted Round Robin: Not Supported 00:08:50.723 Vendor Specific: Not Supported 00:08:50.723 Reset Timeout: 7500 ms 00:08:50.723 Doorbell Stride: 4 bytes 00:08:50.723 NVM Subsystem Reset: Not Supported 00:08:50.723 Command Sets Supported 00:08:50.723 NVM Command Set: Supported 00:08:50.723 Boot Partition: Not Supported 00:08:50.723 Memory Page Size Minimum: 4096 bytes 00:08:50.723 Memory Page Size Maximum: 65536 bytes 00:08:50.723 Persistent Memory Region: Not Supported 00:08:50.723 Optional Asynchronous Events Supported 00:08:50.723 Namespace Attribute Notices: Supported 00:08:50.723 Firmware Activation Notices: Not Supported 00:08:50.723 ANA Change Notices: Not Supported 00:08:50.723 PLE Aggregate Log Change Notices: Not Supported 00:08:50.723 LBA Status Info Alert Notices: Not Supported 00:08:50.723 EGE Aggregate Log Change Notices: Not Supported 00:08:50.723 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.723 Zone Descriptor Change Notices: Not Supported 00:08:50.723 Discovery Log Change Notices: Not Supported 00:08:50.723 Controller Attributes 00:08:50.723 128-bit Host Identifier: Not Supported 00:08:50.723 Non-Operational Permissive Mode: Not Supported 00:08:50.723 NVM Sets: Not Supported 00:08:50.723 Read Recovery Levels: Not Supported 00:08:50.723 Endurance Groups: Not Supported 00:08:50.723 Predictable Latency Mode: Not Supported 00:08:50.723 Traffic Based Keep Alive: Not Supported 00:08:50.723 Namespace Granularity: Not Supported 00:08:50.723 SQ Associations: Not Supported 00:08:50.723 UUID List: Not Supported 00:08:50.723 Multi-Domain Subsystem: Not Supported 00:08:50.723 Fixed Capacity Management: Not Supported 00:08:50.723 Variable Capacity Management: Not Supported 00:08:50.723 Delete Endurance Group: Not Supported 00:08:50.723 Delete NVM Set: Not Supported 00:08:50.723 Extended LBA Formats Supported: Supported 00:08:50.723 Flexible Data Placement Supported: Not Supported 00:08:50.723 00:08:50.723 Controller Memory Buffer Support 00:08:50.723 ================================ 00:08:50.723 Supported: No 00:08:50.723 00:08:50.723 Persistent Memory Region Support 00:08:50.723 ================================ 00:08:50.723 Supported: No 00:08:50.723 00:08:50.723 Admin Command Set Attributes 00:08:50.723 ============================ 00:08:50.723 Security Send/Receive: Not Supported 00:08:50.723 Format NVM: Supported 00:08:50.723 Firmware Activate/Download: Not Supported 00:08:50.723 Namespace Management: Supported 00:08:50.723 Device Self-Test: Not Supported 00:08:50.723 Directives: Supported 00:08:50.723 NVMe-MI: Not Supported 00:08:50.723 Virtualization Management: Not Supported 00:08:50.723 Doorbell Buffer Config: Supported 00:08:50.723 Get LBA Status Capability: Not Supported 00:08:50.723 Command & Feature Lockdown Capability: Not Supported 00:08:50.723 Abort Command Limit: 4 00:08:50.723 Async Event Request Limit: 4 00:08:50.723 Number of Firmware Slots: N/A 00:08:50.723 Firmware Slot 1 Read-Only: N/A 00:08:50.723 Firmware Activation Without Reset: N/A 00:08:50.723 Multiple Update Detection Support: N/A 00:08:50.723 Firmware Update Granularity: No Information Provided 00:08:50.723 Per-Namespace SMART Log: Yes 00:08:50.723 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.723 Subsystem NQN: nqn.2019-08.org.qemu:12341
00:08:50.723 Command Effects Log Page: Supported 00:08:50.723 Get Log Page Extended Data: Supported 00:08:50.723 Telemetry Log Pages: Not Supported 00:08:50.723 Persistent Event Log Pages: Not Supported 00:08:50.723 Supported Log Pages Log Page: May Support 00:08:50.723 Commands Supported & Effects Log Page: Not Supported 00:08:50.723 Feature Identifiers & Effects Log Page: May Support 00:08:50.723 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.723 Data Area 4 for Telemetry Log: Not Supported 00:08:50.723 Error Log Page Entries Supported: 1 00:08:50.723 Keep Alive: Not Supported 00:08:50.723 00:08:50.723 NVM Command Set Attributes 00:08:50.723 ========================== 00:08:50.723 Submission Queue Entry Size 00:08:50.723 Max: 64 00:08:50.723 Min: 64 00:08:50.723 Completion Queue Entry Size 00:08:50.723 Max: 16 00:08:50.723 Min: 16 00:08:50.723 Number of Namespaces: 256 00:08:50.723 Compare Command: Supported 00:08:50.723 Write Uncorrectable Command: Not Supported 00:08:50.723 Dataset Management Command: Supported 00:08:50.723 Write Zeroes Command: Supported 00:08:50.723 Set Features Save Field: Supported 00:08:50.723 Reservations: Not Supported 00:08:50.723 Timestamp: Supported 00:08:50.723 Copy: Supported 00:08:50.723 Volatile Write Cache: Present 00:08:50.723 Atomic Write Unit (Normal): 1 00:08:50.723 Atomic Write Unit (PFail): 1 00:08:50.723 Atomic Compare & Write Unit: 1 00:08:50.723 Fused Compare & Write: Not Supported 00:08:50.723 Scatter-Gather List 00:08:50.723 SGL Command Set: Supported 00:08:50.723 SGL Keyed: Not Supported 00:08:50.723 SGL Bit Bucket Descriptor: Not Supported 00:08:50.723 SGL Metadata Pointer: Not Supported 00:08:50.723 Oversized SGL: Not Supported 00:08:50.723 SGL Metadata Address: Not Supported 00:08:50.723 SGL Offset: Not Supported 00:08:50.723 Transport SGL Data Block: Not Supported 00:08:50.723 Replay Protected Memory Block: Not Supported 00:08:50.723 00:08:50.723 Firmware Slot Information 00:08:50.723 ========================= 00:08:50.723 Active slot: 1 00:08:50.723 Slot 1 Firmware Revision: 1.0 00:08:50.723 00:08:50.723 00:08:50.723 Commands Supported and Effects 00:08:50.723 ============================== 00:08:50.723 Admin Commands 00:08:50.723 -------------- 00:08:50.723 Delete I/O Submission Queue (00h): Supported 00:08:50.723 Create I/O Submission Queue (01h): Supported 00:08:50.723 Get Log Page (02h): Supported 00:08:50.723 Delete I/O Completion Queue (04h): Supported 00:08:50.723 Create I/O Completion Queue (05h): Supported 00:08:50.723 Identify (06h): Supported 00:08:50.723 Abort (08h): Supported 00:08:50.723 Set Features (09h): Supported 00:08:50.723 Get Features (0Ah): Supported 00:08:50.723 Asynchronous Event Request (0Ch): Supported 00:08:50.723 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.723 Directive Send (19h): Supported 00:08:50.723 Directive Receive (1Ah): Supported 00:08:50.723 Virtualization Management (1Ch): Supported 00:08:50.723 Doorbell Buffer Config (7Ch): Supported 00:08:50.723 Format NVM (80h): Supported LBA-Change 00:08:50.723 I/O Commands 00:08:50.723 ------------ 00:08:50.723 Flush (00h): Supported LBA-Change 00:08:50.723 Write (01h): Supported LBA-Change 00:08:50.723 Read (02h): Supported 00:08:50.723 Compare (05h): Supported 00:08:50.723 Write Zeroes (08h): Supported LBA-Change 00:08:50.723 Dataset Management (09h): Supported LBA-Change 00:08:50.723 Unknown (0Ch): Supported 00:08:50.723 Unknown (12h): Supported 00:08:50.723 Copy (19h): Supported LBA-Change 00:08:50.723 Unknown (1Dh):
Supported LBA-Change 00:08:50.723 00:08:50.723 Error Log 00:08:50.723 ========= 00:08:50.723 00:08:50.723 Arbitration 00:08:50.723 =========== 00:08:50.723 Arbitration Burst: no limit 00:08:50.723 00:08:50.723 Power Management 00:08:50.723 ================ 00:08:50.723 Number of Power States: 1 00:08:50.723 Current Power State: Power State #0 00:08:50.723 Power State #0: 00:08:50.723 Max Power: 25.00 W 00:08:50.723 Non-Operational State: Operational 00:08:50.723 Entry Latency: 16 microseconds 00:08:50.723 Exit Latency: 4 microseconds 00:08:50.723 Relative Read Throughput: 0 00:08:50.723 Relative Read Latency: 0 00:08:50.723 Relative Write Throughput: 0 00:08:50.723 Relative Write Latency: 0 00:08:50.723 Idle Power: Not Reported 00:08:50.723 Active Power: Not Reported 00:08:50.723 Non-Operational Permissive Mode: Not Supported 00:08:50.723 00:08:50.723 Health Information 00:08:50.723 ================== 00:08:50.723 Critical Warnings: 00:08:50.723 Available Spare Space: OK 00:08:50.723 Temperature: OK 00:08:50.723 Device Reliability: OK 00:08:50.723 Read Only: No 00:08:50.723 Volatile Memory Backup: OK 00:08:50.723 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.724 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.724 Available Spare: 0% 00:08:50.724 Available Spare Threshold: 0% 00:08:50.724 Life Percentage Used: 0% 00:08:50.724 Data Units Read: 967 00:08:50.724 Data Units Written: 838 00:08:50.724 Host Read Commands: 46095 00:08:50.724 Host Write Commands: 44959 00:08:50.724 Controller Busy Time: 0 minutes 00:08:50.724 Power Cycles: 0 00:08:50.724 Power On Hours: 0 hours 00:08:50.724 Unsafe Shutdowns: 0 00:08:50.724 Unrecoverable Media Errors: 0 00:08:50.724 Lifetime Error Log Entries: 0 00:08:50.724 Warning Temperature Time: 0 minutes 00:08:50.724 Critical Temperature Time: 0 minutes 00:08:50.724 00:08:50.724 Number of Queues 00:08:50.724 ================ 00:08:50.724 Number of I/O Submission Queues: 64 00:08:50.724 Number of I/O Completion Queues: 64 00:08:50.724 00:08:50.724 ZNS Specific Controller Data 00:08:50.724 ============================ 00:08:50.724 Zone Append Size Limit: 0 00:08:50.724 00:08:50.724 00:08:50.724 Active Namespaces 00:08:50.724 ================= 00:08:50.724 Namespace ID:1 00:08:50.724 Error Recovery Timeout: Unlimited 00:08:50.724 Command Set Identifier: NVM (00h) 00:08:50.724 Deallocate: Supported 00:08:50.724 Deallocated/Unwritten Error: Supported 00:08:50.724 Deallocated Read Value: All 0x00 00:08:50.724 Deallocate in Write Zeroes: Not Supported 00:08:50.724 Deallocated Guard Field: 0xFFFF 00:08:50.724 Flush: Supported 00:08:50.724 Reservation: Not Supported 00:08:50.724 Namespace Sharing Capabilities: Private 00:08:50.724 Size (in LBAs): 1310720 (5GiB) 00:08:50.724 Capacity (in LBAs): 1310720 (5GiB) 00:08:50.724 Utilization (in LBAs): 1310720 (5GiB) 00:08:50.724 Thin Provisioning: Not Supported 00:08:50.724 Per-NS Atomic Units: No 00:08:50.724 Maximum Single Source Range Length: 128 00:08:50.724 Maximum Copy Length: 128 00:08:50.724 Maximum Source Range Count: 128 00:08:50.724 NGUID/EUI64 Never Reused: No 00:08:50.724 Namespace Write Protected: No 00:08:50.724 Number of LBA Formats: 8 00:08:50.724 Current LBA Format: LBA Format #04 00:08:50.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.724 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.724 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.724 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.724 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:50.724 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.724 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.724 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.724 00:08:50.724 NVM Specific Namespace Data 00:08:50.724 =========================== 00:08:50.724 Logical Block Storage Tag Mask: 0 00:08:50.724 Protection Information Capabilities: 00:08:50.724 16b Guard Protection Information Storage Tag Support: No 00:08:50.724 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.724 Storage Tag Check Read Support: No 00:08:50.724 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.724 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.724 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:50.983 ===================================================== 00:08:50.984 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.984 ===================================================== 00:08:50.984 Controller Capabilities/Features 00:08:50.984 ================================ 00:08:50.984 Vendor ID: 1b36 00:08:50.984 Subsystem Vendor ID: 1af4 00:08:50.984 Serial Number: 12342 00:08:50.984 Model Number: QEMU NVMe Ctrl 00:08:50.984 Firmware Version: 8.0.0 00:08:50.984 Recommended Arb Burst: 6 00:08:50.984 IEEE OUI Identifier: 00 54 52 00:08:50.984 Multi-path I/O 00:08:50.984 May have multiple subsystem ports: No 00:08:50.984 May have multiple controllers: No 00:08:50.984 Associated with SR-IOV VF: No 00:08:50.984 Max Data Transfer Size: 524288 00:08:50.984 Max Number of Namespaces: 256 00:08:50.984 Max Number of I/O Queues: 64 00:08:50.984 NVMe Specification Version (VS): 1.4 00:08:50.984 NVMe Specification Version (Identify): 1.4 00:08:50.984 Maximum Queue Entries: 2048 00:08:50.984 Contiguous Queues Required: Yes 00:08:50.984 Arbitration Mechanisms Supported 00:08:50.984 Weighted Round Robin: Not Supported 00:08:50.984 Vendor Specific: Not Supported 00:08:50.984 Reset Timeout: 7500 ms 00:08:50.984 Doorbell Stride: 4 bytes 00:08:50.984 NVM Subsystem Reset: Not Supported 00:08:50.984 Command Sets Supported 00:08:50.984 NVM Command Set: Supported 00:08:50.984 Boot Partition: Not Supported 00:08:50.984 Memory Page Size Minimum: 4096 bytes 00:08:50.984 Memory Page Size Maximum: 65536 bytes 00:08:50.984 Persistent Memory Region: Not Supported 00:08:50.984 Optional Asynchronous Events Supported 00:08:50.984 Namespace Attribute Notices: Supported 00:08:50.984 Firmware Activation Notices: Not Supported 00:08:50.984 ANA Change Notices: Not Supported 00:08:50.984 PLE Aggregate Log Change Notices: Not Supported 00:08:50.984 LBA Status Info Alert Notices: 
Not Supported 00:08:50.984 EGE Aggregate Log Change Notices: Not Supported 00:08:50.984 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.984 Zone Descriptor Change Notices: Not Supported 00:08:50.984 Discovery Log Change Notices: Not Supported 00:08:50.984 Controller Attributes 00:08:50.984 128-bit Host Identifier: Not Supported 00:08:50.984 Non-Operational Permissive Mode: Not Supported 00:08:50.984 NVM Sets: Not Supported 00:08:50.984 Read Recovery Levels: Not Supported 00:08:50.984 Endurance Groups: Not Supported 00:08:50.984 Predictable Latency Mode: Not Supported 00:08:50.984 Traffic Based Keep Alive: Not Supported 00:08:50.984 Namespace Granularity: Not Supported 00:08:50.984 SQ Associations: Not Supported 00:08:50.984 UUID List: Not Supported 00:08:50.984 Multi-Domain Subsystem: Not Supported 00:08:50.984 Fixed Capacity Management: Not Supported 00:08:50.984 Variable Capacity Management: Not Supported 00:08:50.984 Delete Endurance Group: Not Supported 00:08:50.984 Delete NVM Set: Not Supported 00:08:50.984 Extended LBA Formats Supported: Supported 00:08:50.984 Flexible Data Placement Supported: Not Supported 00:08:50.984 00:08:50.984 Controller Memory Buffer Support 00:08:50.984 ================================ 00:08:50.984 Supported: No 00:08:50.984 00:08:50.984 Persistent Memory Region Support 00:08:50.984 ================================ 00:08:50.984 Supported: No 00:08:50.984 00:08:50.984 Admin Command Set Attributes 00:08:50.984 ============================ 00:08:50.984 Security Send/Receive: Not Supported 00:08:50.984 Format NVM: Supported 00:08:50.984 Firmware Activate/Download: Not Supported 00:08:50.984 Namespace Management: Supported 00:08:50.984 Device Self-Test: Not Supported 00:08:50.984 Directives: Supported 00:08:50.984 NVMe-MI: Not Supported 00:08:50.984 Virtualization Management: Not Supported 00:08:50.984 Doorbell Buffer Config: Supported 00:08:50.984 Get LBA Status Capability: Not Supported 00:08:50.984 Command & Feature Lockdown Capability: Not Supported 00:08:50.984 Abort Command Limit: 4 00:08:50.984 Async Event Request Limit: 4 00:08:50.984 Number of Firmware Slots: N/A 00:08:50.984 Firmware Slot 1 Read-Only: N/A 00:08:50.984 Firmware Activation Without Reset: N/A 00:08:50.984 Multiple Update Detection Support: N/A 00:08:50.984 Firmware Update Granularity: No Information Provided 00:08:50.984 Per-Namespace SMART Log: Yes 00:08:50.984 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.984 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:50.984 Command Effects Log Page: Supported 00:08:50.984 Get Log Page Extended Data: Supported 00:08:50.984 Telemetry Log Pages: Not Supported 00:08:50.984 Persistent Event Log Pages: Not Supported 00:08:50.984 Supported Log Pages Log Page: May Support 00:08:50.984 Commands Supported & Effects Log Page: Not Supported 00:08:50.984 Feature Identifiers & Effects Log Page: May Support 00:08:50.984 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.984 Data Area 4 for Telemetry Log: Not Supported 00:08:50.984 Error Log Page Entries Supported: 1 00:08:50.984 Keep Alive: Not Supported 00:08:50.984 00:08:50.984 NVM Command Set Attributes 00:08:50.984 ========================== 00:08:50.984 Submission Queue Entry Size 00:08:50.984 Max: 64 00:08:50.984 Min: 64 00:08:50.984 Completion Queue Entry Size 00:08:50.984 Max: 16 00:08:50.984 Min: 16 00:08:50.984 Number of Namespaces: 256 00:08:50.984 Compare Command: Supported 00:08:50.984 Write Uncorrectable Command: Not Supported 00:08:50.984 Dataset Management Command:
Supported 00:08:50.984 Write Zeroes Command: Supported 00:08:50.984 Set Features Save Field: Supported 00:08:50.984 Reservations: Not Supported 00:08:50.984 Timestamp: Supported 00:08:50.984 Copy: Supported 00:08:50.984 Volatile Write Cache: Present 00:08:50.984 Atomic Write Unit (Normal): 1 00:08:50.984 Atomic Write Unit (PFail): 1 00:08:50.984 Atomic Compare & Write Unit: 1 00:08:50.984 Fused Compare & Write: Not Supported 00:08:50.984 Scatter-Gather List 00:08:50.984 SGL Command Set: Supported 00:08:50.984 SGL Keyed: Not Supported 00:08:50.984 SGL Bit Bucket Descriptor: Not Supported 00:08:50.984 SGL Metadata Pointer: Not Supported 00:08:50.984 Oversized SGL: Not Supported 00:08:50.984 SGL Metadata Address: Not Supported 00:08:50.984 SGL Offset: Not Supported 00:08:50.984 Transport SGL Data Block: Not Supported 00:08:50.984 Replay Protected Memory Block: Not Supported 00:08:50.984 00:08:50.984 Firmware Slot Information 00:08:50.984 ========================= 00:08:50.984 Active slot: 1 00:08:50.984 Slot 1 Firmware Revision: 1.0 00:08:50.984 00:08:50.984 00:08:50.984 Commands Supported and Effects 00:08:50.984 ============================== 00:08:50.984 Admin Commands 00:08:50.984 -------------- 00:08:50.984 Delete I/O Submission Queue (00h): Supported 00:08:50.984 Create I/O Submission Queue (01h): Supported 00:08:50.984 Get Log Page (02h): Supported 00:08:50.984 Delete I/O Completion Queue (04h): Supported 00:08:50.984 Create I/O Completion Queue (05h): Supported 00:08:50.984 Identify (06h): Supported 00:08:50.984 Abort (08h): Supported 00:08:50.984 Set Features (09h): Supported 00:08:50.984 Get Features (0Ah): Supported 00:08:50.984 Asynchronous Event Request (0Ch): Supported 00:08:50.984 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.984 Directive Send (19h): Supported 00:08:50.984 Directive Receive (1Ah): Supported 00:08:50.984 Virtualization Management (1Ch): Supported 00:08:50.984 Doorbell Buffer Config (7Ch): Supported 00:08:50.984 Format NVM (80h): Supported LBA-Change 00:08:50.984 I/O Commands 00:08:50.984 ------------ 00:08:50.984 Flush (00h): Supported LBA-Change 00:08:50.984 Write (01h): Supported LBA-Change 00:08:50.984 Read (02h): Supported 00:08:50.984 Compare (05h): Supported 00:08:50.984 Write Zeroes (08h): Supported LBA-Change 00:08:50.984 Dataset Management (09h): Supported LBA-Change 00:08:50.984 Unknown (0Ch): Supported 00:08:50.984 Unknown (12h): Supported 00:08:50.984 Copy (19h): Supported LBA-Change 00:08:50.984 Unknown (1Dh): Supported LBA-Change 00:08:50.984 00:08:50.984 Error Log 00:08:50.984 ========= 00:08:50.984 00:08:50.984 Arbitration 00:08:50.984 =========== 00:08:50.984 Arbitration Burst: no limit 00:08:50.984 00:08:50.984 Power Management 00:08:50.984 ================ 00:08:50.984 Number of Power States: 1 00:08:50.984 Current Power State: Power State #0 00:08:50.984 Power State #0: 00:08:50.984 Max Power: 25.00 W 00:08:50.984 Non-Operational State: Operational 00:08:50.984 Entry Latency: 16 microseconds 00:08:50.984 Exit Latency: 4 microseconds 00:08:50.984 Relative Read Throughput: 0 00:08:50.984 Relative Read Latency: 0 00:08:50.984 Relative Write Throughput: 0 00:08:50.984 Relative Write Latency: 0 00:08:50.984 Idle Power: Not Reported 00:08:50.984 Active Power: Not Reported 00:08:50.984 Non-Operational Permissive Mode: Not Supported 00:08:50.984 00:08:50.984 Health Information 00:08:50.984 ================== 00:08:50.984 Critical Warnings: 00:08:50.984 Available Spare Space: OK 00:08:50.984 Temperature: OK 00:08:50.985 Device 
Reliability: OK 00:08:50.985 Read Only: No 00:08:50.985 Volatile Memory Backup: OK 00:08:50.985 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.985 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.985 Available Spare: 0% 00:08:50.985 Available Spare Threshold: 0% 00:08:50.985 Life Percentage Used: 0% 00:08:50.985 Data Units Read: 2005 00:08:50.985 Data Units Written: 1792 00:08:50.985 Host Read Commands: 95586 00:08:50.985 Host Write Commands: 93855 00:08:50.985 Controller Busy Time: 0 minutes 00:08:50.985 Power Cycles: 0 00:08:50.985 Power On Hours: 0 hours 00:08:50.985 Unsafe Shutdowns: 0 00:08:50.985 Unrecoverable Media Errors: 0 00:08:50.985 Lifetime Error Log Entries: 0 00:08:50.985 Warning Temperature Time: 0 minutes 00:08:50.985 Critical Temperature Time: 0 minutes 00:08:50.985 00:08:50.985 Number of Queues 00:08:50.985 ================ 00:08:50.985 Number of I/O Submission Queues: 64 00:08:50.985 Number of I/O Completion Queues: 64 00:08:50.985 00:08:50.985 ZNS Specific Controller Data 00:08:50.985 ============================ 00:08:50.985 Zone Append Size Limit: 0 00:08:50.985 00:08:50.985 00:08:50.985 Active Namespaces 00:08:50.985 ================= 00:08:50.985 Namespace ID:1 00:08:50.985 Error Recovery Timeout: Unlimited 00:08:50.985 Command Set Identifier: NVM (00h) 00:08:50.985 Deallocate: Supported 00:08:50.985 Deallocated/Unwritten Error: Supported 00:08:50.985 Deallocated Read Value: All 0x00 00:08:50.985 Deallocate in Write Zeroes: Not Supported 00:08:50.985 Deallocated Guard Field: 0xFFFF 00:08:50.985 Flush: Supported 00:08:50.985 Reservation: Not Supported 00:08:50.985 Namespace Sharing Capabilities: Private 00:08:50.985 Size (in LBAs): 1048576 (4GiB) 00:08:50.985 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.985 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.985 Thin Provisioning: Not Supported 00:08:50.985 Per-NS Atomic Units: No 00:08:50.985 Maximum Single Source Range Length: 128 00:08:50.985 Maximum Copy Length: 128 00:08:50.985 Maximum Source Range Count: 128 00:08:50.985 NGUID/EUI64 Never Reused: No 00:08:50.985 Namespace Write Protected: No 00:08:50.985 Number of LBA Formats: 8 00:08:50.985 Current LBA Format: LBA Format #04 00:08:50.985 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.985 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.985 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.985 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.985 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.985 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.985 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.985 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.985 00:08:50.985 NVM Specific Namespace Data 00:08:50.985 =========================== 00:08:50.985 Logical Block Storage Tag Mask: 0 00:08:50.985 Protection Information Capabilities: 00:08:50.985 16b Guard Protection Information Storage Tag Support: No 00:08:50.985 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.985 Storage Tag Check Read Support: No 00:08:50.985 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Namespace ID:2 00:08:50.985 Error Recovery Timeout: Unlimited 00:08:50.985 Command Set Identifier: NVM (00h) 00:08:50.985 Deallocate: Supported 00:08:50.985 Deallocated/Unwritten Error: Supported 00:08:50.985 Deallocated Read Value: All 0x00 00:08:50.985 Deallocate in Write Zeroes: Not Supported 00:08:50.985 Deallocated Guard Field: 0xFFFF 00:08:50.985 Flush: Supported 00:08:50.985 Reservation: Not Supported 00:08:50.985 Namespace Sharing Capabilities: Private 00:08:50.985 Size (in LBAs): 1048576 (4GiB) 00:08:50.985 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.985 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.985 Thin Provisioning: Not Supported 00:08:50.985 Per-NS Atomic Units: No 00:08:50.985 Maximum Single Source Range Length: 128 00:08:50.985 Maximum Copy Length: 128 00:08:50.985 Maximum Source Range Count: 128 00:08:50.985 NGUID/EUI64 Never Reused: No 00:08:50.985 Namespace Write Protected: No 00:08:50.985 Number of LBA Formats: 8 00:08:50.985 Current LBA Format: LBA Format #04 00:08:50.985 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.985 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.985 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.985 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.985 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.985 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.985 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.985 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.985 00:08:50.985 NVM Specific Namespace Data 00:08:50.985 =========================== 00:08:50.985 Logical Block Storage Tag Mask: 0 00:08:50.985 Protection Information Capabilities: 00:08:50.985 16b Guard Protection Information Storage Tag Support: No 00:08:50.985 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.985 Storage Tag Check Read Support: No 00:08:50.985 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Namespace ID:3 00:08:50.985 Error Recovery Timeout: Unlimited 00:08:50.985 Command Set Identifier: NVM (00h) 00:08:50.985 Deallocate: Supported 00:08:50.985 Deallocated/Unwritten Error: Supported 00:08:50.985 Deallocated Read Value: All 0x00 00:08:50.985 Deallocate in Write Zeroes: Not Supported 00:08:50.985 Deallocated Guard Field: 0xFFFF 00:08:50.985 Flush: Supported 00:08:50.985 Reservation: Not Supported 00:08:50.985 
Namespace Sharing Capabilities: Private 00:08:50.985 Size (in LBAs): 1048576 (4GiB) 00:08:50.985 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.985 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.985 Thin Provisioning: Not Supported 00:08:50.985 Per-NS Atomic Units: No 00:08:50.985 Maximum Single Source Range Length: 128 00:08:50.985 Maximum Copy Length: 128 00:08:50.985 Maximum Source Range Count: 128 00:08:50.985 NGUID/EUI64 Never Reused: No 00:08:50.985 Namespace Write Protected: No 00:08:50.985 Number of LBA Formats: 8 00:08:50.985 Current LBA Format: LBA Format #04 00:08:50.985 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.985 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.985 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.985 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.985 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.985 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.985 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.985 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.985 00:08:50.985 NVM Specific Namespace Data 00:08:50.985 =========================== 00:08:50.985 Logical Block Storage Tag Mask: 0 00:08:50.985 Protection Information Capabilities: 00:08:50.985 16b Guard Protection Information Storage Tag Support: No 00:08:50.985 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.985 Storage Tag Check Read Support: No 00:08:50.985 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.985 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.985 10:07:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:50.985 ===================================================== 00:08:50.985 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.985 ===================================================== 00:08:50.985 Controller Capabilities/Features 00:08:50.985 ================================ 00:08:50.985 Vendor ID: 1b36 00:08:50.985 Subsystem Vendor ID: 1af4 00:08:50.985 Serial Number: 12343 00:08:50.985 Model Number: QEMU NVMe Ctrl 00:08:50.985 Firmware Version: 8.0.0 00:08:50.985 Recommended Arb Burst: 6 00:08:50.985 IEEE OUI Identifier: 00 54 52 00:08:50.985 Multi-path I/O 00:08:50.985 May have multiple subsystem ports: No 00:08:50.985 May have multiple controllers: Yes 00:08:50.986 Associated with SR-IOV VF: No 00:08:50.986 Max Data Transfer Size: 524288 00:08:50.986 Max Number of Namespaces: 256 00:08:50.986 Max Number of I/O Queues: 64 00:08:50.986 NVMe Specification Version (VS): 1.4 00:08:50.986 NVMe Specification Version (Identify): 1.4 00:08:50.986 Maximum Queue Entries: 2048 
00:08:50.986 Contiguous Queues Required: Yes 00:08:50.986 Arbitration Mechanisms Supported 00:08:50.986 Weighted Round Robin: Not Supported 00:08:50.986 Vendor Specific: Not Supported 00:08:50.986 Reset Timeout: 7500 ms 00:08:50.986 Doorbell Stride: 4 bytes 00:08:50.986 NVM Subsystem Reset: Not Supported 00:08:50.986 Command Sets Supported 00:08:50.986 NVM Command Set: Supported 00:08:50.986 Boot Partition: Not Supported 00:08:50.986 Memory Page Size Minimum: 4096 bytes 00:08:50.986 Memory Page Size Maximum: 65536 bytes 00:08:50.986 Persistent Memory Region: Not Supported 00:08:50.986 Optional Asynchronous Events Supported 00:08:50.986 Namespace Attribute Notices: Supported 00:08:50.986 Firmware Activation Notices: Not Supported 00:08:50.986 ANA Change Notices: Not Supported 00:08:50.986 PLE Aggregate Log Change Notices: Not Supported 00:08:50.986 LBA Status Info Alert Notices: Not Supported 00:08:50.986 EGE Aggregate Log Change Notices: Not Supported 00:08:50.986 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.986 Zone Descriptor Change Notices: Not Supported 00:08:50.986 Discovery Log Change Notices: Not Supported 00:08:50.986 Controller Attributes 00:08:50.986 128-bit Host Identifier: Not Supported 00:08:50.986 Non-Operational Permissive Mode: Not Supported 00:08:50.986 NVM Sets: Not Supported 00:08:50.986 Read Recovery Levels: Not Supported 00:08:50.986 Endurance Groups: Supported 00:08:50.986 Predictable Latency Mode: Not Supported 00:08:50.986 Traffic Based Keep Alive: Not Supported 00:08:50.986 Namespace Granularity: Not Supported 00:08:50.986 SQ Associations: Not Supported 00:08:50.986 UUID List: Not Supported 00:08:50.986 Multi-Domain Subsystem: Not Supported 00:08:50.986 Fixed Capacity Management: Not Supported 00:08:50.986 Variable Capacity Management: Not Supported 00:08:50.986 Delete Endurance Group: Not Supported 00:08:50.986 Delete NVM Set: Not Supported 00:08:50.986 Extended LBA Formats Supported: Supported 00:08:50.986 Flexible Data Placement Supported: Supported 00:08:50.986 00:08:50.986 Controller Memory Buffer Support 00:08:50.986 ================================ 00:08:50.986 Supported: No 00:08:50.986 00:08:50.986 Persistent Memory Region Support 00:08:50.986 ================================ 00:08:50.986 Supported: No 00:08:50.986 00:08:50.986 Admin Command Set Attributes 00:08:50.986 ============================ 00:08:50.986 Security Send/Receive: Not Supported 00:08:50.986 Format NVM: Supported 00:08:50.986 Firmware Activate/Download: Not Supported 00:08:50.986 Namespace Management: Supported 00:08:50.986 Device Self-Test: Not Supported 00:08:50.986 Directives: Supported 00:08:50.986 NVMe-MI: Not Supported 00:08:50.986 Virtualization Management: Not Supported 00:08:50.986 Doorbell Buffer Config: Supported 00:08:50.986 Get LBA Status Capability: Not Supported 00:08:50.986 Command & Feature Lockdown Capability: Not Supported 00:08:50.986 Abort Command Limit: 4 00:08:50.986 Async Event Request Limit: 4 00:08:50.986 Number of Firmware Slots: N/A 00:08:50.986 Firmware Slot 1 Read-Only: N/A 00:08:50.986 Firmware Activation Without Reset: N/A 00:08:50.986 Multiple Update Detection Support: N/A 00:08:50.986 Firmware Update Granularity: No Information Provided 00:08:50.986 Per-Namespace SMART Log: Yes 00:08:50.986 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.986 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:50.986 Command Effects Log Page: Supported 00:08:50.986 Get Log Page Extended Data: Supported 00:08:50.986 Telemetry Log Pages: Not
Supported 00:08:50.986 Persistent Event Log Pages: Not Supported 00:08:50.986 Supported Log Pages Log Page: May Support 00:08:50.986 Commands Supported & Effects Log Page: Not Supported 00:08:50.986 Feature Identifiers & Effects Log Page: May Support 00:08:50.986 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.986 Data Area 4 for Telemetry Log: Not Supported 00:08:50.986 Error Log Page Entries Supported: 1 00:08:50.986 Keep Alive: Not Supported 00:08:50.986 00:08:50.986 NVM Command Set Attributes 00:08:50.986 ========================== 00:08:50.986 Submission Queue Entry Size 00:08:50.986 Max: 64 00:08:50.986 Min: 64 00:08:50.986 Completion Queue Entry Size 00:08:50.986 Max: 16 00:08:50.986 Min: 16 00:08:50.986 Number of Namespaces: 256 00:08:50.986 Compare Command: Supported 00:08:50.986 Write Uncorrectable Command: Not Supported 00:08:50.986 Dataset Management Command: Supported 00:08:50.986 Write Zeroes Command: Supported 00:08:50.986 Set Features Save Field: Supported 00:08:50.986 Reservations: Not Supported 00:08:50.986 Timestamp: Supported 00:08:50.986 Copy: Supported 00:08:50.986 Volatile Write Cache: Present 00:08:50.986 Atomic Write Unit (Normal): 1 00:08:50.986 Atomic Write Unit (PFail): 1 00:08:50.986 Atomic Compare & Write Unit: 1 00:08:50.986 Fused Compare & Write: Not Supported 00:08:50.986 Scatter-Gather List 00:08:50.986 SGL Command Set: Supported 00:08:50.986 SGL Keyed: Not Supported 00:08:50.986 SGL Bit Bucket Descriptor: Not Supported 00:08:50.986 SGL Metadata Pointer: Not Supported 00:08:50.986 Oversized SGL: Not Supported 00:08:50.986 SGL Metadata Address: Not Supported 00:08:50.986 SGL Offset: Not Supported 00:08:50.986 Transport SGL Data Block: Not Supported 00:08:50.986 Replay Protected Memory Block: Not Supported 00:08:50.986 00:08:50.986 Firmware Slot Information 00:08:50.986 ========================= 00:08:50.986 Active slot: 1 00:08:50.986 Slot 1 Firmware Revision: 1.0 00:08:50.986 00:08:50.986 00:08:50.986 Commands Supported and Effects 00:08:50.986 ============================== 00:08:50.986 Admin Commands 00:08:50.986 -------------- 00:08:50.986 Delete I/O Submission Queue (00h): Supported 00:08:50.986 Create I/O Submission Queue (01h): Supported 00:08:50.986 Get Log Page (02h): Supported 00:08:50.986 Delete I/O Completion Queue (04h): Supported 00:08:50.986 Create I/O Completion Queue (05h): Supported 00:08:50.986 Identify (06h): Supported 00:08:50.986 Abort (08h): Supported 00:08:50.986 Set Features (09h): Supported 00:08:50.986 Get Features (0Ah): Supported 00:08:50.986 Asynchronous Event Request (0Ch): Supported 00:08:50.986 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.986 Directive Send (19h): Supported 00:08:50.986 Directive Receive (1Ah): Supported 00:08:50.986 Virtualization Management (1Ch): Supported 00:08:50.986 Doorbell Buffer Config (7Ch): Supported 00:08:50.986 Format NVM (80h): Supported LBA-Change 00:08:50.986 I/O Commands 00:08:50.986 ------------ 00:08:50.986 Flush (00h): Supported LBA-Change 00:08:50.986 Write (01h): Supported LBA-Change 00:08:50.986 Read (02h): Supported 00:08:50.986 Compare (05h): Supported 00:08:50.986 Write Zeroes (08h): Supported LBA-Change 00:08:50.986 Dataset Management (09h): Supported LBA-Change 00:08:50.986 Unknown (0Ch): Supported 00:08:50.986 Unknown (12h): Supported 00:08:50.986 Copy (19h): Supported LBA-Change 00:08:50.986 Unknown (1Dh): Supported LBA-Change 00:08:50.986 00:08:50.986 Error Log 00:08:50.986 ========= 00:08:50.986 00:08:50.986 Arbitration 00:08:50.986 ===========
00:08:50.986 Arbitration Burst: no limit 00:08:50.986 00:08:50.986 Power Management 00:08:50.986 ================ 00:08:50.986 Number of Power States: 1 00:08:50.986 Current Power State: Power State #0 00:08:50.986 Power State #0: 00:08:50.986 Max Power: 25.00 W 00:08:50.986 Non-Operational State: Operational 00:08:50.986 Entry Latency: 16 microseconds 00:08:50.986 Exit Latency: 4 microseconds 00:08:50.986 Relative Read Throughput: 0 00:08:50.986 Relative Read Latency: 0 00:08:50.986 Relative Write Throughput: 0 00:08:50.986 Relative Write Latency: 0 00:08:50.986 Idle Power: Not Reported 00:08:50.986 Active Power: Not Reported 00:08:50.986 Non-Operational Permissive Mode: Not Supported 00:08:50.986 00:08:50.986 Health Information 00:08:50.986 ================== 00:08:50.986 Critical Warnings: 00:08:50.986 Available Spare Space: OK 00:08:50.986 Temperature: OK 00:08:50.986 Device Reliability: OK 00:08:50.986 Read Only: No 00:08:50.986 Volatile Memory Backup: OK 00:08:50.986 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.986 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.986 Available Spare: 0% 00:08:50.986 Available Spare Threshold: 0% 00:08:50.986 Life Percentage Used: 0% 00:08:50.986 Data Units Read: 737 00:08:50.986 Data Units Written: 666 00:08:50.986 Host Read Commands: 32412 00:08:50.986 Host Write Commands: 31835 00:08:50.986 Controller Busy Time: 0 minutes 00:08:50.986 Power Cycles: 0 00:08:50.986 Power On Hours: 0 hours 00:08:50.986 Unsafe Shutdowns: 0 00:08:50.986 Unrecoverable Media Errors: 0 00:08:50.986 Lifetime Error Log Entries: 0 00:08:50.986 Warning Temperature Time: 0 minutes 00:08:50.986 Critical Temperature Time: 0 minutes 00:08:50.986 00:08:50.986 Number of Queues 00:08:50.986 ================ 00:08:50.986 Number of I/O Submission Queues: 64 00:08:50.986 Number of I/O Completion Queues: 64 00:08:50.987 00:08:50.987 ZNS Specific Controller Data 00:08:50.987 ============================ 00:08:50.987 Zone Append Size Limit: 0 00:08:50.987 00:08:50.987 00:08:50.987 Active Namespaces 00:08:50.987 ================= 00:08:50.987 Namespace ID:1 00:08:50.987 Error Recovery Timeout: Unlimited 00:08:50.987 Command Set Identifier: NVM (00h) 00:08:50.987 Deallocate: Supported 00:08:50.987 Deallocated/Unwritten Error: Supported 00:08:50.987 Deallocated Read Value: All 0x00 00:08:50.987 Deallocate in Write Zeroes: Not Supported 00:08:50.987 Deallocated Guard Field: 0xFFFF 00:08:50.987 Flush: Supported 00:08:50.987 Reservation: Not Supported 00:08:50.987 Namespace Sharing Capabilities: Multiple Controllers 00:08:50.987 Size (in LBAs): 262144 (1GiB) 00:08:50.987 Capacity (in LBAs): 262144 (1GiB) 00:08:50.987 Utilization (in LBAs): 262144 (1GiB) 00:08:50.987 Thin Provisioning: Not Supported 00:08:50.987 Per-NS Atomic Units: No 00:08:50.987 Maximum Single Source Range Length: 128 00:08:50.987 Maximum Copy Length: 128 00:08:50.987 Maximum Source Range Count: 128 00:08:50.987 NGUID/EUI64 Never Reused: No 00:08:50.987 Namespace Write Protected: No 00:08:50.987 Endurance group ID: 1 00:08:50.987 Number of LBA Formats: 8 00:08:50.987 Current LBA Format: LBA Format #04 00:08:50.987 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.987 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.987 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.987 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.987 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.987 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.987 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:50.987 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.987 00:08:50.987 Get Feature FDP: 00:08:50.987 ================ 00:08:50.987 Enabled: Yes 00:08:50.987 FDP configuration index: 0 00:08:50.987 00:08:50.987 FDP configurations log page 00:08:50.987 =========================== 00:08:50.987 Number of FDP configurations: 1 00:08:50.987 Version: 0 00:08:50.987 Size: 112 00:08:50.987 FDP Configuration Descriptor: 0 00:08:50.987 Descriptor Size: 96 00:08:50.987 Reclaim Group Identifier format: 2 00:08:50.987 FDP Volatile Write Cache: Not Present 00:08:50.987 FDP Configuration: Valid 00:08:50.987 Vendor Specific Size: 0 00:08:50.987 Number of Reclaim Groups: 2 00:08:50.987 Number of Reclaim Unit Handles: 8 00:08:50.987 Max Placement Identifiers: 128 00:08:50.987 Number of Namespaces Supported: 256 00:08:50.987 Reclaim Unit Nominal Size: 6000000 bytes 00:08:50.987 Estimated Reclaim Unit Time Limit: Not Reported 00:08:50.987 RUH Desc #000: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #001: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #002: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #003: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #004: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #005: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #006: RUH Type: Initially Isolated 00:08:50.987 RUH Desc #007: RUH Type: Initially Isolated 00:08:50.987 00:08:50.987 FDP reclaim unit handle usage log page 00:08:51.246 ====================================== 00:08:51.246 Number of Reclaim Unit Handles: 8 00:08:51.246 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:51.246 RUH Usage Desc #001: RUH Attributes: Unused 00:08:51.246 RUH Usage Desc #002: RUH Attributes: Unused 00:08:51.246 RUH Usage Desc #003: RUH Attributes: Unused 00:08:51.246 RUH Usage Desc #004: RUH Attributes: Unused 00:08:51.246 RUH Usage Desc #005: RUH Attributes: Unused 00:08:51.246 RUH Usage Desc #006: RUH Attributes: Unused 00:08:51.246 RUH Usage Desc #007: RUH Attributes: Unused 00:08:51.246 00:08:51.246 FDP statistics log page 00:08:51.246 ======================= 00:08:51.246 Host bytes with metadata written: 416063488 00:08:51.246 Media bytes with metadata written: 416108544 00:08:51.246 Media bytes erased: 0 00:08:51.246 00:08:51.246 FDP events log page 00:08:51.246 =================== 00:08:51.246 Number of FDP events: 0 00:08:51.246 00:08:51.246 NVM Specific Namespace Data 00:08:51.246 =========================== 00:08:51.246 Logical Block Storage Tag Mask: 0 00:08:51.246 Protection Information Capabilities: 00:08:51.246 16b Guard Protection Information Storage Tag Support: No 00:08:51.246 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.246 Storage Tag Check Read Support: No 00:08:51.246 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.246 00:08:51.246 real 0m1.178s 00:08:51.246 user 0m0.419s 00:08:51.246 sys 0m0.555s 00:08:51.246 10:07:54 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.246 ************************************ 00:08:51.246 END TEST nvme_identify 00:08:51.246 ************************************ 00:08:51.246 10:07:54 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:51.246 10:07:54 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:51.246 10:07:54 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.246 10:07:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.246 10:07:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.246 ************************************ 00:08:51.246 START TEST nvme_perf 00:08:51.246 ************************************ 00:08:51.246 10:07:54 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:51.246 10:07:54 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:52.627 Initializing NVMe Controllers 00:08:52.627 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.627 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.627 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.627 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.627 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:52.627 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:52.627 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:52.627 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:52.627 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:52.627 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:52.627 Initialization complete. Launching workers. 
00:08:52.627 ======================================================== 00:08:52.627 Latency(us) 00:08:52.627 Device Information : IOPS MiB/s Average min max 00:08:52.627 PCIE (0000:00:11.0) NSID 1 from core 0: 7724.07 90.52 16602.34 10292.58 39735.84 00:08:52.627 PCIE (0000:00:13.0) NSID 1 from core 0: 7724.07 90.52 16582.77 10303.81 38821.51 00:08:52.627 PCIE (0000:00:10.0) NSID 1 from core 0: 7724.07 90.52 16557.21 10163.89 38170.80 00:08:52.627 PCIE (0000:00:12.0) NSID 1 from core 0: 7724.07 90.52 16533.01 9989.32 37032.49 00:08:52.627 PCIE (0000:00:12.0) NSID 2 from core 0: 7724.07 90.52 16508.11 9382.74 36209.21 00:08:52.627 PCIE (0000:00:12.0) NSID 3 from core 0: 7787.91 91.26 16349.25 9538.36 26179.40 00:08:52.627 ======================================================== 00:08:52.627 Total : 46408.27 543.85 16521.88 9382.74 39735.84 00:08:52.627 00:08:52.627 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:52.627 ================================================================================= 00:08:52.627 1.00000% : 10989.883us 00:08:52.627 10.00000% : 13308.849us 00:08:52.627 25.00000% : 14317.095us 00:08:52.627 50.00000% : 16031.114us 00:08:52.627 75.00000% : 18551.729us 00:08:52.627 90.00000% : 20265.748us 00:08:52.627 95.00000% : 21374.818us 00:08:52.627 98.00000% : 24298.732us 00:08:52.627 99.00000% : 31053.982us 00:08:52.627 99.50000% : 38313.354us 00:08:52.627 99.90000% : 39523.249us 00:08:52.627 99.99000% : 39926.548us 00:08:52.627 99.99900% : 39926.548us 00:08:52.627 99.99990% : 39926.548us 00:08:52.627 99.99999% : 39926.548us 00:08:52.627 00:08:52.627 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:52.627 ================================================================================= 00:08:52.627 1.00000% : 10939.471us 00:08:52.627 10.00000% : 13308.849us 00:08:52.627 25.00000% : 14317.095us 00:08:52.627 50.00000% : 16131.938us 00:08:52.627 75.00000% : 18551.729us 00:08:52.627 90.00000% : 20265.748us 00:08:52.627 95.00000% : 21273.994us 00:08:52.627 98.00000% : 23895.434us 00:08:52.627 99.00000% : 29844.086us 00:08:52.627 99.50000% : 37506.757us 00:08:52.627 99.90000% : 38716.652us 00:08:52.627 99.99000% : 38918.302us 00:08:52.627 99.99900% : 38918.302us 00:08:52.627 99.99990% : 38918.302us 00:08:52.627 99.99999% : 38918.302us 00:08:52.627 00:08:52.627 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:52.627 ================================================================================= 00:08:52.627 1.00000% : 10838.646us 00:08:52.627 10.00000% : 13107.200us 00:08:52.627 25.00000% : 14317.095us 00:08:52.627 50.00000% : 16131.938us 00:08:52.627 75.00000% : 18450.905us 00:08:52.627 90.00000% : 20366.572us 00:08:52.627 95.00000% : 21374.818us 00:08:52.627 98.00000% : 23895.434us 00:08:52.627 99.00000% : 28432.542us 00:08:52.627 99.50000% : 36700.160us 00:08:52.627 99.90000% : 37910.055us 00:08:52.627 99.99000% : 38313.354us 00:08:52.627 99.99900% : 38313.354us 00:08:52.627 99.99990% : 38313.354us 00:08:52.627 99.99999% : 38313.354us 00:08:52.627 00:08:52.627 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:52.627 ================================================================================= 00:08:52.627 1.00000% : 10687.409us 00:08:52.627 10.00000% : 13208.025us 00:08:52.627 25.00000% : 14317.095us 00:08:52.627 50.00000% : 16031.114us 00:08:52.627 75.00000% : 18652.554us 00:08:52.627 90.00000% : 20164.923us 00:08:52.627 95.00000% : 21072.345us 00:08:52.627 98.00000% : 23592.960us 
00:08:52.627 99.00000% : 26819.348us 00:08:52.627 99.50000% : 35691.914us 00:08:52.627 99.90000% : 36901.809us 00:08:52.627 99.99000% : 37103.458us 00:08:52.627 99.99900% : 37103.458us 00:08:52.627 99.99990% : 37103.458us 00:08:52.627 99.99999% : 37103.458us 00:08:52.627 00:08:52.627 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:52.627 ================================================================================= 00:08:52.627 1.00000% : 10838.646us 00:08:52.627 10.00000% : 13208.025us 00:08:52.627 25.00000% : 14317.095us 00:08:52.627 50.00000% : 16131.938us 00:08:52.627 75.00000% : 18551.729us 00:08:52.627 90.00000% : 20164.923us 00:08:52.627 95.00000% : 21273.994us 00:08:52.627 98.00000% : 24097.083us 00:08:52.627 99.00000% : 26214.400us 00:08:52.627 99.50000% : 34885.317us 00:08:52.627 99.90000% : 36095.212us 00:08:52.627 99.99000% : 36296.862us 00:08:52.627 99.99900% : 36296.862us 00:08:52.627 99.99990% : 36296.862us 00:08:52.627 99.99999% : 36296.862us 00:08:52.627 00:08:52.627 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:52.627 ================================================================================= 00:08:52.627 1.00000% : 10788.234us 00:08:52.627 10.00000% : 13208.025us 00:08:52.627 25.00000% : 14216.271us 00:08:52.627 50.00000% : 16031.114us 00:08:52.627 75.00000% : 18551.729us 00:08:52.627 90.00000% : 20064.098us 00:08:52.627 95.00000% : 21072.345us 00:08:52.627 98.00000% : 22887.188us 00:08:52.627 99.00000% : 24702.031us 00:08:52.627 99.50000% : 25306.978us 00:08:52.627 99.90000% : 26214.400us 00:08:52.627 99.99000% : 26214.400us 00:08:52.627 99.99900% : 26214.400us 00:08:52.627 99.99990% : 26214.400us 00:08:52.627 99.99999% : 26214.400us 00:08:52.627 00:08:52.627 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:52.627 ============================================================================== 00:08:52.627 Range in us Cumulative IO count 00:08:52.627 10284.111 - 10334.523: 0.0517% ( 4) 00:08:52.627 10334.523 - 10384.935: 0.0775% ( 2) 00:08:52.627 10384.935 - 10435.348: 0.1162% ( 3) 00:08:52.627 10435.348 - 10485.760: 0.1679% ( 4) 00:08:52.627 10485.760 - 10536.172: 0.2066% ( 3) 00:08:52.627 10536.172 - 10586.585: 0.3099% ( 8) 00:08:52.627 10586.585 - 10636.997: 0.3487% ( 3) 00:08:52.627 10636.997 - 10687.409: 0.4003% ( 4) 00:08:52.627 10687.409 - 10737.822: 0.5036% ( 8) 00:08:52.627 10737.822 - 10788.234: 0.6327% ( 10) 00:08:52.627 10788.234 - 10838.646: 0.8006% ( 13) 00:08:52.627 10838.646 - 10889.058: 0.9168% ( 9) 00:08:52.628 10889.058 - 10939.471: 0.9943% ( 6) 00:08:52.628 10939.471 - 10989.883: 1.0718% ( 6) 00:08:52.628 10989.883 - 11040.295: 1.2138% ( 11) 00:08:52.628 11040.295 - 11090.708: 1.3559% ( 11) 00:08:52.628 11090.708 - 11141.120: 1.5754% ( 17) 00:08:52.628 11141.120 - 11191.532: 1.7562% ( 14) 00:08:52.628 11191.532 - 11241.945: 1.9241% ( 13) 00:08:52.628 11241.945 - 11292.357: 2.0919% ( 13) 00:08:52.628 11292.357 - 11342.769: 2.2727% ( 14) 00:08:52.628 11342.769 - 11393.182: 2.4406% ( 13) 00:08:52.628 11393.182 - 11443.594: 2.6214% ( 14) 00:08:52.628 11443.594 - 11494.006: 2.7893% ( 13) 00:08:52.628 11494.006 - 11544.418: 2.9442% ( 12) 00:08:52.628 11544.418 - 11594.831: 3.0863% ( 11) 00:08:52.628 11594.831 - 11645.243: 3.2283% ( 11) 00:08:52.628 11645.243 - 11695.655: 3.4091% ( 14) 00:08:52.628 11695.655 - 11746.068: 3.5253% ( 9) 00:08:52.628 11746.068 - 11796.480: 3.6932% ( 13) 00:08:52.628 11796.480 - 11846.892: 3.8481% ( 12) 00:08:52.628 11846.892 - 11897.305: 4.0289% ( 14) 
00:08:52.628 11897.305 - 11947.717: 4.2097% ( 14) 00:08:52.628 11947.717 - 11998.129: 4.4163% ( 16) 00:08:52.628 11998.129 - 12048.542: 4.6229% ( 16) 00:08:52.628 12048.542 - 12098.954: 4.8166% ( 15) 00:08:52.628 12098.954 - 12149.366: 4.9845% ( 13) 00:08:52.628 12149.366 - 12199.778: 5.1524% ( 13) 00:08:52.628 12199.778 - 12250.191: 5.3461% ( 15) 00:08:52.628 12250.191 - 12300.603: 5.5269% ( 14) 00:08:52.628 12300.603 - 12351.015: 5.6818% ( 12) 00:08:52.628 12351.015 - 12401.428: 5.8368% ( 12) 00:08:52.628 12401.428 - 12451.840: 5.9788% ( 11) 00:08:52.628 12451.840 - 12502.252: 6.1338% ( 12) 00:08:52.628 12502.252 - 12552.665: 6.3275% ( 15) 00:08:52.628 12552.665 - 12603.077: 6.5212% ( 15) 00:08:52.628 12603.077 - 12653.489: 6.7020% ( 14) 00:08:52.628 12653.489 - 12703.902: 6.9086% ( 16) 00:08:52.628 12703.902 - 12754.314: 7.1798% ( 21) 00:08:52.628 12754.314 - 12804.726: 7.3993% ( 17) 00:08:52.628 12804.726 - 12855.138: 7.6059% ( 16) 00:08:52.628 12855.138 - 12905.551: 7.7608% ( 12) 00:08:52.628 12905.551 - 13006.375: 8.2257% ( 36) 00:08:52.628 13006.375 - 13107.200: 8.8456% ( 48) 00:08:52.628 13107.200 - 13208.025: 9.5300% ( 53) 00:08:52.628 13208.025 - 13308.849: 10.3435% ( 63) 00:08:52.628 13308.849 - 13409.674: 11.3507% ( 78) 00:08:52.628 13409.674 - 13510.498: 12.6420% ( 100) 00:08:52.628 13510.498 - 13611.323: 14.0108% ( 106) 00:08:52.628 13611.323 - 13712.148: 15.3796% ( 106) 00:08:52.628 13712.148 - 13812.972: 17.3554% ( 153) 00:08:52.628 13812.972 - 13913.797: 19.0728% ( 133) 00:08:52.628 13913.797 - 14014.622: 20.9582% ( 146) 00:08:52.628 14014.622 - 14115.446: 22.7402% ( 138) 00:08:52.628 14115.446 - 14216.271: 24.5480% ( 140) 00:08:52.628 14216.271 - 14317.095: 26.2397% ( 131) 00:08:52.628 14317.095 - 14417.920: 28.1379% ( 147) 00:08:52.628 14417.920 - 14518.745: 29.8295% ( 131) 00:08:52.628 14518.745 - 14619.569: 31.5212% ( 131) 00:08:52.628 14619.569 - 14720.394: 33.2645% ( 135) 00:08:52.628 14720.394 - 14821.218: 34.7882% ( 118) 00:08:52.628 14821.218 - 14922.043: 36.4540% ( 129) 00:08:52.628 14922.043 - 15022.868: 37.8099% ( 105) 00:08:52.628 15022.868 - 15123.692: 39.1400% ( 103) 00:08:52.628 15123.692 - 15224.517: 40.3796% ( 96) 00:08:52.628 15224.517 - 15325.342: 41.6451% ( 98) 00:08:52.628 15325.342 - 15426.166: 43.0139% ( 106) 00:08:52.628 15426.166 - 15526.991: 44.1245% ( 86) 00:08:52.628 15526.991 - 15627.815: 45.3512% ( 95) 00:08:52.628 15627.815 - 15728.640: 46.8104% ( 113) 00:08:52.628 15728.640 - 15829.465: 47.9985% ( 92) 00:08:52.628 15829.465 - 15930.289: 49.1477% ( 89) 00:08:52.628 15930.289 - 16031.114: 50.1679% ( 79) 00:08:52.628 16031.114 - 16131.938: 51.2784% ( 86) 00:08:52.628 16131.938 - 16232.763: 52.4277% ( 89) 00:08:52.628 16232.763 - 16333.588: 53.5899% ( 90) 00:08:52.628 16333.588 - 16434.412: 54.7521% ( 90) 00:08:52.628 16434.412 - 16535.237: 55.9401% ( 92) 00:08:52.628 16535.237 - 16636.062: 56.9731% ( 80) 00:08:52.628 16636.062 - 16736.886: 58.0579% ( 84) 00:08:52.628 16736.886 - 16837.711: 59.2071% ( 89) 00:08:52.628 16837.711 - 16938.535: 60.3693% ( 90) 00:08:52.628 16938.535 - 17039.360: 61.4540% ( 84) 00:08:52.628 17039.360 - 17140.185: 62.4742% ( 79) 00:08:52.628 17140.185 - 17241.009: 63.4427% ( 75) 00:08:52.628 17241.009 - 17341.834: 64.4628% ( 79) 00:08:52.628 17341.834 - 17442.658: 65.4959% ( 80) 00:08:52.628 17442.658 - 17543.483: 66.3611% ( 67) 00:08:52.628 17543.483 - 17644.308: 67.3425% ( 76) 00:08:52.628 17644.308 - 17745.132: 68.1689% ( 64) 00:08:52.628 17745.132 - 17845.957: 69.0083% ( 65) 00:08:52.628 17845.957 - 17946.782: 
69.9509% ( 73) 00:08:52.628 17946.782 - 18047.606: 70.8549% ( 70) 00:08:52.628 18047.606 - 18148.431: 71.7588% ( 70) 00:08:52.628 18148.431 - 18249.255: 72.5981% ( 65) 00:08:52.628 18249.255 - 18350.080: 73.4504% ( 66) 00:08:52.628 18350.080 - 18450.905: 74.3543% ( 70) 00:08:52.628 18450.905 - 18551.729: 75.2324% ( 68) 00:08:52.628 18551.729 - 18652.554: 76.1364% ( 70) 00:08:52.628 18652.554 - 18753.378: 77.0661% ( 72) 00:08:52.628 18753.378 - 18854.203: 78.0088% ( 73) 00:08:52.628 18854.203 - 18955.028: 79.0160% ( 78) 00:08:52.628 18955.028 - 19055.852: 79.9587% ( 73) 00:08:52.628 19055.852 - 19156.677: 80.9013% ( 73) 00:08:52.628 19156.677 - 19257.502: 81.9086% ( 78) 00:08:52.628 19257.502 - 19358.326: 82.8900% ( 76) 00:08:52.628 19358.326 - 19459.151: 83.8585% ( 75) 00:08:52.628 19459.151 - 19559.975: 84.8011% ( 73) 00:08:52.628 19559.975 - 19660.800: 85.6147% ( 63) 00:08:52.628 19660.800 - 19761.625: 86.5315% ( 71) 00:08:52.628 19761.625 - 19862.449: 87.5000% ( 75) 00:08:52.628 19862.449 - 19963.274: 88.4168% ( 71) 00:08:52.628 19963.274 - 20064.098: 89.1916% ( 60) 00:08:52.628 20064.098 - 20164.923: 89.7986% ( 47) 00:08:52.628 20164.923 - 20265.748: 90.3667% ( 44) 00:08:52.628 20265.748 - 20366.572: 90.9091% ( 42) 00:08:52.628 20366.572 - 20467.397: 91.4385% ( 41) 00:08:52.628 20467.397 - 20568.222: 92.0455% ( 47) 00:08:52.628 20568.222 - 20669.046: 92.6524% ( 47) 00:08:52.628 20669.046 - 20769.871: 93.1302% ( 37) 00:08:52.628 20769.871 - 20870.695: 93.5950% ( 36) 00:08:52.628 20870.695 - 20971.520: 94.0083% ( 32) 00:08:52.628 20971.520 - 21072.345: 94.3569% ( 27) 00:08:52.628 21072.345 - 21173.169: 94.6668% ( 24) 00:08:52.628 21173.169 - 21273.994: 94.9251% ( 20) 00:08:52.628 21273.994 - 21374.818: 95.1188% ( 15) 00:08:52.628 21374.818 - 21475.643: 95.3125% ( 15) 00:08:52.628 21475.643 - 21576.468: 95.4675% ( 12) 00:08:52.628 21576.468 - 21677.292: 95.6224% ( 12) 00:08:52.628 21677.292 - 21778.117: 95.7515% ( 10) 00:08:52.628 21778.117 - 21878.942: 95.9194% ( 13) 00:08:52.628 21878.942 - 21979.766: 96.1002% ( 14) 00:08:52.628 21979.766 - 22080.591: 96.3197% ( 17) 00:08:52.628 22080.591 - 22181.415: 96.5005% ( 14) 00:08:52.628 22181.415 - 22282.240: 96.6167% ( 9) 00:08:52.628 22282.240 - 22383.065: 96.7459% ( 10) 00:08:52.628 22383.065 - 22483.889: 96.8363% ( 7) 00:08:52.628 22483.889 - 22584.714: 96.8879% ( 4) 00:08:52.628 22584.714 - 22685.538: 96.9783% ( 7) 00:08:52.628 22685.538 - 22786.363: 97.0429% ( 5) 00:08:52.628 22786.363 - 22887.188: 97.1204% ( 6) 00:08:52.628 22887.188 - 22988.012: 97.2237% ( 8) 00:08:52.628 22988.012 - 23088.837: 97.3915% ( 13) 00:08:52.628 23088.837 - 23189.662: 97.5207% ( 10) 00:08:52.628 23189.662 - 23290.486: 97.6111% ( 7) 00:08:52.628 23290.486 - 23391.311: 97.6756% ( 5) 00:08:52.628 23391.311 - 23492.135: 97.7273% ( 4) 00:08:52.628 23492.135 - 23592.960: 97.7918% ( 5) 00:08:52.628 23592.960 - 23693.785: 97.8306% ( 3) 00:08:52.628 23693.785 - 23794.609: 97.8693% ( 3) 00:08:52.628 23794.609 - 23895.434: 97.8951% ( 2) 00:08:52.628 23895.434 - 23996.258: 97.9339% ( 3) 00:08:52.628 23996.258 - 24097.083: 97.9597% ( 2) 00:08:52.628 24097.083 - 24197.908: 97.9985% ( 3) 00:08:52.628 24197.908 - 24298.732: 98.0372% ( 3) 00:08:52.628 24298.732 - 24399.557: 98.0888% ( 4) 00:08:52.628 24399.557 - 24500.382: 98.1405% ( 4) 00:08:52.628 24500.382 - 24601.206: 98.2051% ( 5) 00:08:52.628 24601.206 - 24702.031: 98.2567% ( 4) 00:08:52.628 24702.031 - 24802.855: 98.3084% ( 4) 00:08:52.628 24802.855 - 24903.680: 98.3471% ( 3) 00:08:52.628 29642.437 - 29844.086: 98.4375% 
( 7) 00:08:52.628 29844.086 - 30045.735: 98.5537% ( 9) 00:08:52.628 30045.735 - 30247.385: 98.6570% ( 8) 00:08:52.628 30247.385 - 30449.034: 98.7603% ( 8) 00:08:52.628 30449.034 - 30650.683: 98.8765% ( 9) 00:08:52.628 30650.683 - 30852.332: 98.9799% ( 8) 00:08:52.628 30852.332 - 31053.982: 99.0961% ( 9) 00:08:52.628 31053.982 - 31255.631: 99.1736% ( 6) 00:08:52.628 37305.108 - 37506.757: 99.2123% ( 3) 00:08:52.628 37506.757 - 37708.406: 99.2769% ( 5) 00:08:52.628 37708.406 - 37910.055: 99.3543% ( 6) 00:08:52.628 37910.055 - 38111.705: 99.4318% ( 6) 00:08:52.628 38111.705 - 38313.354: 99.5093% ( 6) 00:08:52.628 38313.354 - 38515.003: 99.5610% ( 4) 00:08:52.628 38515.003 - 38716.652: 99.6255% ( 5) 00:08:52.628 38716.652 - 38918.302: 99.7030% ( 6) 00:08:52.628 38918.302 - 39119.951: 99.7676% ( 5) 00:08:52.628 39119.951 - 39321.600: 99.8450% ( 6) 00:08:52.628 39321.600 - 39523.249: 99.9225% ( 6) 00:08:52.628 39523.249 - 39724.898: 99.9871% ( 5) 00:08:52.628 39724.898 - 39926.548: 100.0000% ( 1) 00:08:52.628 00:08:52.628 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:52.628 ============================================================================== 00:08:52.628 Range in us Cumulative IO count 00:08:52.628 10284.111 - 10334.523: 0.0387% ( 3) 00:08:52.628 10334.523 - 10384.935: 0.0646% ( 2) 00:08:52.628 10384.935 - 10435.348: 0.1420% ( 6) 00:08:52.628 10435.348 - 10485.760: 0.2066% ( 5) 00:08:52.628 10485.760 - 10536.172: 0.2841% ( 6) 00:08:52.628 10536.172 - 10586.585: 0.3357% ( 4) 00:08:52.628 10586.585 - 10636.997: 0.4390% ( 8) 00:08:52.628 10636.997 - 10687.409: 0.5424% ( 8) 00:08:52.628 10687.409 - 10737.822: 0.6586% ( 9) 00:08:52.628 10737.822 - 10788.234: 0.7748% ( 9) 00:08:52.628 10788.234 - 10838.646: 0.8523% ( 6) 00:08:52.628 10838.646 - 10889.058: 0.9685% ( 9) 00:08:52.628 10889.058 - 10939.471: 1.1105% ( 11) 00:08:52.628 10939.471 - 10989.883: 1.2397% ( 10) 00:08:52.629 10989.883 - 11040.295: 1.4205% ( 14) 00:08:52.629 11040.295 - 11090.708: 1.6012% ( 14) 00:08:52.629 11090.708 - 11141.120: 1.7820% ( 14) 00:08:52.629 11141.120 - 11191.532: 1.9499% ( 13) 00:08:52.629 11191.532 - 11241.945: 2.1565% ( 16) 00:08:52.629 11241.945 - 11292.357: 2.3244% ( 13) 00:08:52.629 11292.357 - 11342.769: 2.5052% ( 14) 00:08:52.629 11342.769 - 11393.182: 2.6860% ( 14) 00:08:52.629 11393.182 - 11443.594: 2.8667% ( 14) 00:08:52.629 11443.594 - 11494.006: 2.9959% ( 10) 00:08:52.629 11494.006 - 11544.418: 3.1508% ( 12) 00:08:52.629 11544.418 - 11594.831: 3.3316% ( 14) 00:08:52.629 11594.831 - 11645.243: 3.5382% ( 16) 00:08:52.629 11645.243 - 11695.655: 3.7061% ( 13) 00:08:52.629 11695.655 - 11746.068: 3.8481% ( 11) 00:08:52.629 11746.068 - 11796.480: 4.0935% ( 19) 00:08:52.629 11796.480 - 11846.892: 4.3001% ( 16) 00:08:52.629 11846.892 - 11897.305: 4.4938% ( 15) 00:08:52.629 11897.305 - 11947.717: 4.6875% ( 15) 00:08:52.629 11947.717 - 11998.129: 4.8941% ( 16) 00:08:52.629 11998.129 - 12048.542: 5.1007% ( 16) 00:08:52.629 12048.542 - 12098.954: 5.2686% ( 13) 00:08:52.629 12098.954 - 12149.366: 5.3977% ( 10) 00:08:52.629 12149.366 - 12199.778: 5.5398% ( 11) 00:08:52.629 12199.778 - 12250.191: 5.6818% ( 11) 00:08:52.629 12250.191 - 12300.603: 5.8239% ( 11) 00:08:52.629 12300.603 - 12351.015: 5.9530% ( 10) 00:08:52.629 12351.015 - 12401.428: 6.0950% ( 11) 00:08:52.629 12401.428 - 12451.840: 6.2629% ( 13) 00:08:52.629 12451.840 - 12502.252: 6.4179% ( 12) 00:08:52.629 12502.252 - 12552.665: 6.5857% ( 13) 00:08:52.629 12552.665 - 12603.077: 6.7407% ( 12) 00:08:52.629 12603.077 - 12653.489: 
6.9344% ( 15) 00:08:52.629 12653.489 - 12703.902: 7.1410% ( 16) 00:08:52.629 12703.902 - 12754.314: 7.2831% ( 11) 00:08:52.629 12754.314 - 12804.726: 7.5284% ( 19) 00:08:52.629 12804.726 - 12855.138: 7.7996% ( 21) 00:08:52.629 12855.138 - 12905.551: 8.0708% ( 21) 00:08:52.629 12905.551 - 13006.375: 8.5873% ( 40) 00:08:52.629 13006.375 - 13107.200: 9.1684% ( 45) 00:08:52.629 13107.200 - 13208.025: 9.8915% ( 56) 00:08:52.629 13208.025 - 13308.849: 10.6921% ( 62) 00:08:52.629 13308.849 - 13409.674: 11.5315% ( 65) 00:08:52.629 13409.674 - 13510.498: 12.4354% ( 70) 00:08:52.629 13510.498 - 13611.323: 13.4039% ( 75) 00:08:52.629 13611.323 - 13712.148: 14.6823% ( 99) 00:08:52.629 13712.148 - 13812.972: 16.5160% ( 142) 00:08:52.629 13812.972 - 13913.797: 18.2980% ( 138) 00:08:52.629 13913.797 - 14014.622: 19.8735% ( 122) 00:08:52.629 14014.622 - 14115.446: 21.5780% ( 132) 00:08:52.629 14115.446 - 14216.271: 23.2825% ( 132) 00:08:52.629 14216.271 - 14317.095: 25.0517% ( 137) 00:08:52.629 14317.095 - 14417.920: 26.9628% ( 148) 00:08:52.629 14417.920 - 14518.745: 28.6932% ( 134) 00:08:52.629 14518.745 - 14619.569: 30.5656% ( 145) 00:08:52.629 14619.569 - 14720.394: 32.3218% ( 136) 00:08:52.629 14720.394 - 14821.218: 33.9360% ( 125) 00:08:52.629 14821.218 - 14922.043: 35.3564% ( 110) 00:08:52.629 14922.043 - 15022.868: 36.6348% ( 99) 00:08:52.629 15022.868 - 15123.692: 38.2361% ( 124) 00:08:52.629 15123.692 - 15224.517: 39.5790% ( 104) 00:08:52.629 15224.517 - 15325.342: 40.6250% ( 81) 00:08:52.629 15325.342 - 15426.166: 41.9034% ( 99) 00:08:52.629 15426.166 - 15526.991: 43.2206% ( 102) 00:08:52.629 15526.991 - 15627.815: 44.5894% ( 106) 00:08:52.629 15627.815 - 15728.640: 45.9711% ( 107) 00:08:52.629 15728.640 - 15829.465: 47.3011% ( 103) 00:08:52.629 15829.465 - 15930.289: 48.5925% ( 100) 00:08:52.629 15930.289 - 16031.114: 49.8838% ( 100) 00:08:52.629 16031.114 - 16131.938: 51.3688% ( 115) 00:08:52.629 16131.938 - 16232.763: 53.0088% ( 127) 00:08:52.629 16232.763 - 16333.588: 54.5584% ( 120) 00:08:52.629 16333.588 - 16434.412: 56.1338% ( 122) 00:08:52.629 16434.412 - 16535.237: 57.5930% ( 113) 00:08:52.629 16535.237 - 16636.062: 58.7810% ( 92) 00:08:52.629 16636.062 - 16736.886: 59.8786% ( 85) 00:08:52.629 16736.886 - 16837.711: 60.8729% ( 77) 00:08:52.629 16837.711 - 16938.535: 61.8285% ( 74) 00:08:52.629 16938.535 - 17039.360: 62.7454% ( 71) 00:08:52.629 17039.360 - 17140.185: 63.5976% ( 66) 00:08:52.629 17140.185 - 17241.009: 64.4112% ( 63) 00:08:52.629 17241.009 - 17341.834: 65.2505% ( 65) 00:08:52.629 17341.834 - 17442.658: 66.0382% ( 61) 00:08:52.629 17442.658 - 17543.483: 66.6322% ( 46) 00:08:52.629 17543.483 - 17644.308: 67.2650% ( 49) 00:08:52.629 17644.308 - 17745.132: 67.9106% ( 50) 00:08:52.629 17745.132 - 17845.957: 68.6338% ( 56) 00:08:52.629 17845.957 - 17946.782: 69.4861% ( 66) 00:08:52.629 17946.782 - 18047.606: 70.2479% ( 59) 00:08:52.629 18047.606 - 18148.431: 71.2035% ( 74) 00:08:52.629 18148.431 - 18249.255: 72.1591% ( 74) 00:08:52.629 18249.255 - 18350.080: 73.0372% ( 68) 00:08:52.629 18350.080 - 18450.905: 73.9928% ( 74) 00:08:52.629 18450.905 - 18551.729: 75.1679% ( 91) 00:08:52.629 18551.729 - 18652.554: 76.3559% ( 92) 00:08:52.629 18652.554 - 18753.378: 77.2856% ( 72) 00:08:52.629 18753.378 - 18854.203: 78.1508% ( 67) 00:08:52.629 18854.203 - 18955.028: 79.0677% ( 71) 00:08:52.629 18955.028 - 19055.852: 80.0620% ( 77) 00:08:52.629 19055.852 - 19156.677: 81.0305% ( 75) 00:08:52.629 19156.677 - 19257.502: 81.9731% ( 73) 00:08:52.629 19257.502 - 19358.326: 82.9158% ( 73) 
00:08:52.629 19358.326 - 19459.151: 83.8326% ( 71) 00:08:52.629 19459.151 - 19559.975: 84.7495% ( 71) 00:08:52.629 19559.975 - 19660.800: 85.6792% ( 72) 00:08:52.629 19660.800 - 19761.625: 86.4540% ( 60) 00:08:52.629 19761.625 - 19862.449: 87.2934% ( 65) 00:08:52.629 19862.449 - 19963.274: 88.1327% ( 65) 00:08:52.629 19963.274 - 20064.098: 88.8946% ( 59) 00:08:52.629 20064.098 - 20164.923: 89.5532% ( 51) 00:08:52.629 20164.923 - 20265.748: 90.2247% ( 52) 00:08:52.629 20265.748 - 20366.572: 90.8704% ( 50) 00:08:52.629 20366.572 - 20467.397: 91.5677% ( 54) 00:08:52.629 20467.397 - 20568.222: 92.2650% ( 54) 00:08:52.629 20568.222 - 20669.046: 92.8202% ( 43) 00:08:52.629 20669.046 - 20769.871: 93.3755% ( 43) 00:08:52.629 20769.871 - 20870.695: 93.8275% ( 35) 00:08:52.629 20870.695 - 20971.520: 94.2278% ( 31) 00:08:52.629 20971.520 - 21072.345: 94.6152% ( 30) 00:08:52.629 21072.345 - 21173.169: 94.9122% ( 23) 00:08:52.629 21173.169 - 21273.994: 95.1188% ( 16) 00:08:52.629 21273.994 - 21374.818: 95.2738% ( 12) 00:08:52.629 21374.818 - 21475.643: 95.4158% ( 11) 00:08:52.629 21475.643 - 21576.468: 95.5449% ( 10) 00:08:52.629 21576.468 - 21677.292: 95.6870% ( 11) 00:08:52.629 21677.292 - 21778.117: 95.8549% ( 13) 00:08:52.629 21778.117 - 21878.942: 95.9840% ( 10) 00:08:52.629 21878.942 - 21979.766: 96.1260% ( 11) 00:08:52.629 21979.766 - 22080.591: 96.2939% ( 13) 00:08:52.629 22080.591 - 22181.415: 96.4747% ( 14) 00:08:52.629 22181.415 - 22282.240: 96.6167% ( 11) 00:08:52.629 22282.240 - 22383.065: 96.7459% ( 10) 00:08:52.629 22383.065 - 22483.889: 96.8492% ( 8) 00:08:52.629 22483.889 - 22584.714: 96.9783% ( 10) 00:08:52.629 22584.714 - 22685.538: 97.0687% ( 7) 00:08:52.629 22685.538 - 22786.363: 97.1849% ( 9) 00:08:52.629 22786.363 - 22887.188: 97.2753% ( 7) 00:08:52.629 22887.188 - 22988.012: 97.3399% ( 5) 00:08:52.629 22988.012 - 23088.837: 97.4044% ( 5) 00:08:52.629 23088.837 - 23189.662: 97.4948% ( 7) 00:08:52.629 23189.662 - 23290.486: 97.5723% ( 6) 00:08:52.629 23290.486 - 23391.311: 97.6627% ( 7) 00:08:52.629 23391.311 - 23492.135: 97.7402% ( 6) 00:08:52.629 23492.135 - 23592.960: 97.8306% ( 7) 00:08:52.629 23592.960 - 23693.785: 97.9081% ( 6) 00:08:52.629 23693.785 - 23794.609: 97.9985% ( 7) 00:08:52.629 23794.609 - 23895.434: 98.0759% ( 6) 00:08:52.629 23895.434 - 23996.258: 98.1405% ( 5) 00:08:52.629 23996.258 - 24097.083: 98.1663% ( 2) 00:08:52.629 24097.083 - 24197.908: 98.2051% ( 3) 00:08:52.629 24197.908 - 24298.732: 98.2438% ( 3) 00:08:52.629 24298.732 - 24399.557: 98.2825% ( 3) 00:08:52.629 24399.557 - 24500.382: 98.3213% ( 3) 00:08:52.629 24500.382 - 24601.206: 98.3471% ( 2) 00:08:52.629 28432.542 - 28634.191: 98.4504% ( 8) 00:08:52.629 28634.191 - 28835.840: 98.5537% ( 8) 00:08:52.629 28835.840 - 29037.489: 98.6699% ( 9) 00:08:52.629 29037.489 - 29239.138: 98.7732% ( 8) 00:08:52.629 29239.138 - 29440.788: 98.8895% ( 9) 00:08:52.629 29440.788 - 29642.437: 98.9928% ( 8) 00:08:52.629 29642.437 - 29844.086: 99.0961% ( 8) 00:08:52.629 29844.086 - 30045.735: 99.1736% ( 6) 00:08:52.629 36498.511 - 36700.160: 99.2252% ( 4) 00:08:52.629 36700.160 - 36901.809: 99.2898% ( 5) 00:08:52.629 36901.809 - 37103.458: 99.3673% ( 6) 00:08:52.629 37103.458 - 37305.108: 99.4318% ( 5) 00:08:52.629 37305.108 - 37506.757: 99.5093% ( 6) 00:08:52.629 37506.757 - 37708.406: 99.5868% ( 6) 00:08:52.629 37708.406 - 37910.055: 99.6513% ( 5) 00:08:52.629 37910.055 - 38111.705: 99.7288% ( 6) 00:08:52.629 38111.705 - 38313.354: 99.8063% ( 6) 00:08:52.629 38313.354 - 38515.003: 99.8838% ( 6) 00:08:52.629 38515.003 
- 38716.652: 99.9483% ( 5) 00:08:52.629 38716.652 - 38918.302: 100.0000% ( 4) 00:08:52.629 00:08:52.629 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:52.629 ============================================================================== 00:08:52.629 Range in us Cumulative IO count 00:08:52.629 10132.874 - 10183.286: 0.0517% ( 4) 00:08:52.629 10183.286 - 10233.698: 0.1033% ( 4) 00:08:52.629 10233.698 - 10284.111: 0.1420% ( 3) 00:08:52.629 10284.111 - 10334.523: 0.1937% ( 4) 00:08:52.629 10334.523 - 10384.935: 0.2454% ( 4) 00:08:52.629 10384.935 - 10435.348: 0.2970% ( 4) 00:08:52.629 10435.348 - 10485.760: 0.4003% ( 8) 00:08:52.629 10485.760 - 10536.172: 0.4520% ( 4) 00:08:52.629 10536.172 - 10586.585: 0.5553% ( 8) 00:08:52.629 10586.585 - 10636.997: 0.6198% ( 5) 00:08:52.629 10636.997 - 10687.409: 0.6973% ( 6) 00:08:52.629 10687.409 - 10737.822: 0.7877% ( 7) 00:08:52.629 10737.822 - 10788.234: 0.9298% ( 11) 00:08:52.629 10788.234 - 10838.646: 1.0201% ( 7) 00:08:52.629 10838.646 - 10889.058: 1.0718% ( 4) 00:08:52.629 10889.058 - 10939.471: 1.1622% ( 7) 00:08:52.629 10939.471 - 10989.883: 1.2655% ( 8) 00:08:52.629 10989.883 - 11040.295: 1.3559% ( 7) 00:08:52.629 11040.295 - 11090.708: 1.5238% ( 13) 00:08:52.630 11090.708 - 11141.120: 1.6271% ( 8) 00:08:52.630 11141.120 - 11191.532: 1.7691% ( 11) 00:08:52.630 11191.532 - 11241.945: 1.9886% ( 17) 00:08:52.630 11241.945 - 11292.357: 2.1049% ( 9) 00:08:52.630 11292.357 - 11342.769: 2.3373% ( 18) 00:08:52.630 11342.769 - 11393.182: 2.4277% ( 7) 00:08:52.630 11393.182 - 11443.594: 2.6472% ( 17) 00:08:52.630 11443.594 - 11494.006: 2.7634% ( 9) 00:08:52.630 11494.006 - 11544.418: 3.0217% ( 20) 00:08:52.630 11544.418 - 11594.831: 3.0992% ( 6) 00:08:52.630 11594.831 - 11645.243: 3.3704% ( 21) 00:08:52.630 11645.243 - 11695.655: 3.4866% ( 9) 00:08:52.630 11695.655 - 11746.068: 3.6674% ( 14) 00:08:52.630 11746.068 - 11796.480: 3.8481% ( 14) 00:08:52.630 11796.480 - 11846.892: 3.9902% ( 11) 00:08:52.630 11846.892 - 11897.305: 4.1581% ( 13) 00:08:52.630 11897.305 - 11947.717: 4.2355% ( 6) 00:08:52.630 11947.717 - 11998.129: 4.3905% ( 12) 00:08:52.630 11998.129 - 12048.542: 4.5971% ( 16) 00:08:52.630 12048.542 - 12098.954: 4.8295% ( 18) 00:08:52.630 12098.954 - 12149.366: 4.9716% ( 11) 00:08:52.630 12149.366 - 12199.778: 5.2169% ( 19) 00:08:52.630 12199.778 - 12250.191: 5.3590% ( 11) 00:08:52.630 12250.191 - 12300.603: 5.6043% ( 19) 00:08:52.630 12300.603 - 12351.015: 5.8884% ( 22) 00:08:52.630 12351.015 - 12401.428: 6.0950% ( 16) 00:08:52.630 12401.428 - 12451.840: 6.4954% ( 31) 00:08:52.630 12451.840 - 12502.252: 6.6761% ( 14) 00:08:52.630 12502.252 - 12552.665: 6.8698% ( 15) 00:08:52.630 12552.665 - 12603.077: 7.1668% ( 23) 00:08:52.630 12603.077 - 12653.489: 7.3864% ( 17) 00:08:52.630 12653.489 - 12703.902: 7.6575% ( 21) 00:08:52.630 12703.902 - 12754.314: 7.9287% ( 21) 00:08:52.630 12754.314 - 12804.726: 8.3032% ( 29) 00:08:52.630 12804.726 - 12855.138: 8.5615% ( 20) 00:08:52.630 12855.138 - 12905.551: 8.8843% ( 25) 00:08:52.630 12905.551 - 13006.375: 9.6462% ( 59) 00:08:52.630 13006.375 - 13107.200: 10.3177% ( 52) 00:08:52.630 13107.200 - 13208.025: 10.9762% ( 51) 00:08:52.630 13208.025 - 13308.849: 11.7769% ( 62) 00:08:52.630 13308.849 - 13409.674: 12.7841% ( 78) 00:08:52.630 13409.674 - 13510.498: 13.7655% ( 76) 00:08:52.630 13510.498 - 13611.323: 14.9793% ( 94) 00:08:52.630 13611.323 - 13712.148: 16.3740% ( 108) 00:08:52.630 13712.148 - 13812.972: 17.9494% ( 122) 00:08:52.630 13812.972 - 13913.797: 19.4215% ( 114) 00:08:52.630 
13913.797 - 14014.622: 20.8549% ( 111) 00:08:52.630 14014.622 - 14115.446: 22.3786% ( 118) 00:08:52.630 14115.446 - 14216.271: 23.9669% ( 123) 00:08:52.630 14216.271 - 14317.095: 25.4261% ( 113) 00:08:52.630 14317.095 - 14417.920: 27.0015% ( 122) 00:08:52.630 14417.920 - 14518.745: 28.5511% ( 120) 00:08:52.630 14518.745 - 14619.569: 30.2299% ( 130) 00:08:52.630 14619.569 - 14720.394: 31.7149% ( 115) 00:08:52.630 14720.394 - 14821.218: 33.4323% ( 133) 00:08:52.630 14821.218 - 14922.043: 34.7624% ( 103) 00:08:52.630 14922.043 - 15022.868: 36.3507% ( 123) 00:08:52.630 15022.868 - 15123.692: 37.4483% ( 85) 00:08:52.630 15123.692 - 15224.517: 38.6235% ( 91) 00:08:52.630 15224.517 - 15325.342: 39.7856% ( 90) 00:08:52.630 15325.342 - 15426.166: 40.8704% ( 84) 00:08:52.630 15426.166 - 15526.991: 42.0842% ( 94) 00:08:52.630 15526.991 - 15627.815: 43.2593% ( 91) 00:08:52.630 15627.815 - 15728.640: 44.6539% ( 108) 00:08:52.630 15728.640 - 15829.465: 46.0744% ( 110) 00:08:52.630 15829.465 - 15930.289: 47.4044% ( 103) 00:08:52.630 15930.289 - 16031.114: 48.8895% ( 115) 00:08:52.630 16031.114 - 16131.938: 50.0646% ( 91) 00:08:52.630 16131.938 - 16232.763: 51.3301% ( 98) 00:08:52.630 16232.763 - 16333.588: 53.0992% ( 137) 00:08:52.630 16333.588 - 16434.412: 54.3130% ( 94) 00:08:52.630 16434.412 - 16535.237: 55.8626% ( 120) 00:08:52.630 16535.237 - 16636.062: 57.3089% ( 112) 00:08:52.630 16636.062 - 16736.886: 58.8972% ( 123) 00:08:52.630 16736.886 - 16837.711: 60.2660% ( 106) 00:08:52.630 16837.711 - 16938.535: 61.4669% ( 93) 00:08:52.630 16938.535 - 17039.360: 62.6679% ( 93) 00:08:52.630 17039.360 - 17140.185: 63.7009% ( 80) 00:08:52.630 17140.185 - 17241.009: 64.6565% ( 74) 00:08:52.630 17241.009 - 17341.834: 65.6508% ( 77) 00:08:52.630 17341.834 - 17442.658: 66.4256% ( 60) 00:08:52.630 17442.658 - 17543.483: 67.2262% ( 62) 00:08:52.630 17543.483 - 17644.308: 68.0914% ( 67) 00:08:52.630 17644.308 - 17745.132: 68.8791% ( 61) 00:08:52.630 17745.132 - 17845.957: 69.8735% ( 77) 00:08:52.630 17845.957 - 17946.782: 70.4029% ( 41) 00:08:52.630 17946.782 - 18047.606: 71.2293% ( 64) 00:08:52.630 18047.606 - 18148.431: 72.2237% ( 77) 00:08:52.630 18148.431 - 18249.255: 73.0501% ( 64) 00:08:52.630 18249.255 - 18350.080: 74.1348% ( 84) 00:08:52.630 18350.080 - 18450.905: 75.3616% ( 95) 00:08:52.630 18450.905 - 18551.729: 76.4850% ( 87) 00:08:52.630 18551.729 - 18652.554: 77.3631% ( 68) 00:08:52.630 18652.554 - 18753.378: 78.2025% ( 65) 00:08:52.630 18753.378 - 18854.203: 79.1064% ( 70) 00:08:52.630 18854.203 - 18955.028: 79.9329% ( 64) 00:08:52.630 18955.028 - 19055.852: 80.9143% ( 76) 00:08:52.630 19055.852 - 19156.677: 81.8311% ( 71) 00:08:52.630 19156.677 - 19257.502: 82.8771% ( 81) 00:08:52.630 19257.502 - 19358.326: 83.9230% ( 81) 00:08:52.630 19358.326 - 19459.151: 84.6978% ( 60) 00:08:52.630 19459.151 - 19559.975: 85.1240% ( 33) 00:08:52.630 19559.975 - 19660.800: 85.9375% ( 63) 00:08:52.630 19660.800 - 19761.625: 86.6090% ( 52) 00:08:52.630 19761.625 - 19862.449: 87.4096% ( 62) 00:08:52.630 19862.449 - 19963.274: 87.9261% ( 40) 00:08:52.630 19963.274 - 20064.098: 88.5072% ( 45) 00:08:52.630 20064.098 - 20164.923: 89.2562% ( 58) 00:08:52.630 20164.923 - 20265.748: 89.9019% ( 50) 00:08:52.630 20265.748 - 20366.572: 90.4700% ( 44) 00:08:52.630 20366.572 - 20467.397: 90.9866% ( 40) 00:08:52.630 20467.397 - 20568.222: 91.4127% ( 33) 00:08:52.630 20568.222 - 20669.046: 92.2004% ( 61) 00:08:52.630 20669.046 - 20769.871: 92.7815% ( 45) 00:08:52.630 20769.871 - 20870.695: 93.2464% ( 36) 00:08:52.630 20870.695 - 
20971.520: 93.6854% ( 34) 00:08:52.630 20971.520 - 21072.345: 93.9695% ( 22) 00:08:52.630 21072.345 - 21173.169: 94.6539% ( 53) 00:08:52.630 21173.169 - 21273.994: 94.8735% ( 17) 00:08:52.630 21273.994 - 21374.818: 95.1446% ( 21) 00:08:52.630 21374.818 - 21475.643: 95.2996% ( 12) 00:08:52.630 21475.643 - 21576.468: 95.4029% ( 8) 00:08:52.630 21576.468 - 21677.292: 95.5579% ( 12) 00:08:52.630 21677.292 - 21778.117: 95.6353% ( 6) 00:08:52.630 21778.117 - 21878.942: 95.8549% ( 17) 00:08:52.630 21878.942 - 21979.766: 95.9065% ( 4) 00:08:52.630 21979.766 - 22080.591: 96.1519% ( 19) 00:08:52.630 22080.591 - 22181.415: 96.2810% ( 10) 00:08:52.630 22181.415 - 22282.240: 96.5393% ( 20) 00:08:52.630 22282.240 - 22383.065: 96.6813% ( 11) 00:08:52.630 22383.065 - 22483.889: 96.8879% ( 16) 00:08:52.630 22483.889 - 22584.714: 97.0041% ( 9) 00:08:52.630 22584.714 - 22685.538: 97.1591% ( 12) 00:08:52.630 22685.538 - 22786.363: 97.2366% ( 6) 00:08:52.630 22786.363 - 22887.188: 97.3270% ( 7) 00:08:52.630 22887.188 - 22988.012: 97.3915% ( 5) 00:08:52.630 22988.012 - 23088.837: 97.4690% ( 6) 00:08:52.630 23088.837 - 23189.662: 97.5077% ( 3) 00:08:52.630 23189.662 - 23290.486: 97.6240% ( 9) 00:08:52.630 23290.486 - 23391.311: 97.6627% ( 3) 00:08:52.630 23391.311 - 23492.135: 97.7789% ( 9) 00:08:52.630 23492.135 - 23592.960: 97.8306% ( 4) 00:08:52.630 23592.960 - 23693.785: 97.9339% ( 8) 00:08:52.630 23693.785 - 23794.609: 97.9985% ( 5) 00:08:52.630 23794.609 - 23895.434: 98.1018% ( 8) 00:08:52.630 23895.434 - 23996.258: 98.1921% ( 7) 00:08:52.630 23996.258 - 24097.083: 98.2567% ( 5) 00:08:52.630 24097.083 - 24197.908: 98.3084% ( 4) 00:08:52.630 24197.908 - 24298.732: 98.3342% ( 2) 00:08:52.630 24298.732 - 24399.557: 98.3471% ( 1) 00:08:52.630 26819.348 - 27020.997: 98.3988% ( 4) 00:08:52.630 27020.997 - 27222.646: 98.4892% ( 7) 00:08:52.630 27222.646 - 27424.295: 98.5795% ( 7) 00:08:52.630 27424.295 - 27625.945: 98.6958% ( 9) 00:08:52.630 27625.945 - 27827.594: 98.7732% ( 6) 00:08:52.630 27827.594 - 28029.243: 98.8765% ( 8) 00:08:52.630 28029.243 - 28230.892: 98.9669% ( 7) 00:08:52.630 28230.892 - 28432.542: 99.0702% ( 8) 00:08:52.630 28432.542 - 28634.191: 99.1736% ( 8) 00:08:52.630 35490.265 - 35691.914: 99.2252% ( 4) 00:08:52.630 35691.914 - 35893.563: 99.2898% ( 5) 00:08:52.630 35893.563 - 36095.212: 99.3543% ( 5) 00:08:52.630 36095.212 - 36296.862: 99.4189% ( 5) 00:08:52.630 36296.862 - 36498.511: 99.4706% ( 4) 00:08:52.630 36498.511 - 36700.160: 99.5222% ( 4) 00:08:52.630 36700.160 - 36901.809: 99.5868% ( 5) 00:08:52.630 36901.809 - 37103.458: 99.6384% ( 4) 00:08:52.630 37103.458 - 37305.108: 99.7159% ( 6) 00:08:52.630 37305.108 - 37506.757: 99.7934% ( 6) 00:08:52.630 37506.757 - 37708.406: 99.8580% ( 5) 00:08:52.630 37708.406 - 37910.055: 99.9354% ( 6) 00:08:52.630 37910.055 - 38111.705: 99.9871% ( 4) 00:08:52.630 38111.705 - 38313.354: 100.0000% ( 1) 00:08:52.630 00:08:52.630 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:52.630 ============================================================================== 00:08:52.630 Range in us Cumulative IO count 00:08:52.630 9981.637 - 10032.049: 0.0387% ( 3) 00:08:52.630 10032.049 - 10082.462: 0.0646% ( 2) 00:08:52.630 10082.462 - 10132.874: 0.1162% ( 4) 00:08:52.630 10132.874 - 10183.286: 0.1937% ( 6) 00:08:52.630 10183.286 - 10233.698: 0.2583% ( 5) 00:08:52.630 10233.698 - 10284.111: 0.3228% ( 5) 00:08:52.630 10284.111 - 10334.523: 0.3874% ( 5) 00:08:52.630 10334.523 - 10384.935: 0.4520% ( 5) 00:08:52.630 10384.935 - 10435.348: 0.5165% ( 5) 
00:08:52.630 10435.348 - 10485.760: 0.6069% ( 7) 00:08:52.630 10485.760 - 10536.172: 0.7231% ( 9) 00:08:52.630 10536.172 - 10586.585: 0.8135% ( 7) 00:08:52.630 10586.585 - 10636.997: 0.9168% ( 8) 00:08:52.630 10636.997 - 10687.409: 1.0201% ( 8) 00:08:52.630 10687.409 - 10737.822: 1.1364% ( 9) 00:08:52.630 10737.822 - 10788.234: 1.2268% ( 7) 00:08:52.630 10788.234 - 10838.646: 1.3301% ( 8) 00:08:52.630 10838.646 - 10889.058: 1.4334% ( 8) 00:08:52.630 10889.058 - 10939.471: 1.5367% ( 8) 00:08:52.630 10939.471 - 10989.883: 1.6400% ( 8) 00:08:52.630 10989.883 - 11040.295: 1.7562% ( 9) 00:08:52.631 11040.295 - 11090.708: 1.8466% ( 7) 00:08:52.631 11090.708 - 11141.120: 1.9628% ( 9) 00:08:52.631 11141.120 - 11191.532: 2.1178% ( 12) 00:08:52.631 11191.532 - 11241.945: 2.2986% ( 14) 00:08:52.631 11241.945 - 11292.357: 2.4277% ( 10) 00:08:52.631 11292.357 - 11342.769: 2.4923% ( 5) 00:08:52.631 11342.769 - 11393.182: 2.6214% ( 10) 00:08:52.631 11393.182 - 11443.594: 2.6989% ( 6) 00:08:52.631 11443.594 - 11494.006: 2.8409% ( 11) 00:08:52.631 11494.006 - 11544.418: 2.9571% ( 9) 00:08:52.631 11544.418 - 11594.831: 3.0992% ( 11) 00:08:52.631 11594.831 - 11645.243: 3.2025% ( 8) 00:08:52.631 11645.243 - 11695.655: 3.3187% ( 9) 00:08:52.631 11695.655 - 11746.068: 3.4091% ( 7) 00:08:52.631 11746.068 - 11796.480: 3.5253% ( 9) 00:08:52.631 11796.480 - 11846.892: 3.6674% ( 11) 00:08:52.631 11846.892 - 11897.305: 3.8740% ( 16) 00:08:52.631 11897.305 - 11947.717: 4.0160% ( 11) 00:08:52.631 11947.717 - 11998.129: 4.1839% ( 13) 00:08:52.631 11998.129 - 12048.542: 4.3905% ( 16) 00:08:52.631 12048.542 - 12098.954: 4.6100% ( 17) 00:08:52.631 12098.954 - 12149.366: 4.8166% ( 16) 00:08:52.631 12149.366 - 12199.778: 5.0232% ( 16) 00:08:52.631 12199.778 - 12250.191: 5.2299% ( 16) 00:08:52.631 12250.191 - 12300.603: 5.4494% ( 17) 00:08:52.631 12300.603 - 12351.015: 5.6560% ( 16) 00:08:52.631 12351.015 - 12401.428: 5.8626% ( 16) 00:08:52.631 12401.428 - 12451.840: 6.0821% ( 17) 00:08:52.631 12451.840 - 12502.252: 6.3275% ( 19) 00:08:52.631 12502.252 - 12552.665: 6.5599% ( 18) 00:08:52.631 12552.665 - 12603.077: 6.8311% ( 21) 00:08:52.631 12603.077 - 12653.489: 7.0894% ( 20) 00:08:52.631 12653.489 - 12703.902: 7.3347% ( 19) 00:08:52.631 12703.902 - 12754.314: 7.5801% ( 19) 00:08:52.631 12754.314 - 12804.726: 7.8512% ( 21) 00:08:52.631 12804.726 - 12855.138: 8.1353% ( 22) 00:08:52.631 12855.138 - 12905.551: 8.4711% ( 26) 00:08:52.631 12905.551 - 13006.375: 9.1296% ( 51) 00:08:52.631 13006.375 - 13107.200: 9.8657% ( 57) 00:08:52.631 13107.200 - 13208.025: 10.5759% ( 55) 00:08:52.631 13208.025 - 13308.849: 11.3120% ( 57) 00:08:52.631 13308.849 - 13409.674: 12.1901% ( 68) 00:08:52.631 13409.674 - 13510.498: 13.1457% ( 74) 00:08:52.631 13510.498 - 13611.323: 14.2562% ( 86) 00:08:52.631 13611.323 - 13712.148: 15.5992% ( 104) 00:08:52.631 13712.148 - 13812.972: 17.0971% ( 116) 00:08:52.631 13812.972 - 13913.797: 18.8920% ( 139) 00:08:52.631 13913.797 - 14014.622: 20.6224% ( 134) 00:08:52.631 14014.622 - 14115.446: 22.2624% ( 127) 00:08:52.631 14115.446 - 14216.271: 23.8765% ( 125) 00:08:52.631 14216.271 - 14317.095: 25.7231% ( 143) 00:08:52.631 14317.095 - 14417.920: 27.5439% ( 141) 00:08:52.631 14417.920 - 14518.745: 29.4421% ( 147) 00:08:52.631 14518.745 - 14619.569: 31.0176% ( 122) 00:08:52.631 14619.569 - 14720.394: 32.3605% ( 104) 00:08:52.631 14720.394 - 14821.218: 33.7681% ( 109) 00:08:52.631 14821.218 - 14922.043: 35.1756% ( 109) 00:08:52.631 14922.043 - 15022.868: 36.5444% ( 106) 00:08:52.631 15022.868 - 15123.692: 
00:08:52.631 [latency histogram continues: per-bucket data from 15123.692us (cumulative 38.0811%) up to 37103.458us (cumulative 100.0000%)]
00:08:52.632
00:08:52.632 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:52.632 ==============================================================================
00:08:52.632        Range in us     Cumulative    IO count
00:08:52.632 [per-bucket data from 9376.689us (cumulative 0.0258%) up to 36296.862us (cumulative 100.0000%)]
00:08:52.633
00:08:52.633 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:52.633 ==============================================================================
00:08:52.633        Range in us     Cumulative    IO count
00:08:52.633 [per-bucket data from 9527.926us (cumulative 0.0512%) up to 26214.400us (cumulative 100.0000%)]
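Every histogram line above follows one shape: "<low_us> - <high_us>: <pct>% ( <n>)", where <pct> is the cumulative share of I/Os completed at or below the bucket's upper bound and <n> is the I/O count landing in that bucket alone. The percentile summaries printed after each run are consistent with reading the histogram this way; for example, PCIE (0000:00:10.0) NSID 1 below first reaches 50% in the bucket ending at 15224.517us, which is exactly its reported 50.00000% latency. A minimal sketch of that reduction in Python, assuming only the line shape above (the regex, the helper names, and the take-the-upper-bound percentile rule are illustrative, not SPDK internals):

import re

# Bucket shape assumed from the log: "<lo> - <hi>: <cum>% ( <count>)".
# Timestamps such as "00:08:52.631" interleaved in the text do not match,
# because the pattern requires a "-" between the two range endpoints.
BUCKET_RE = re.compile(
    r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
    r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\)"
)

def parse_buckets(text):
    # -> list of (lo_us, hi_us, cumulative_pct, per_bucket_io_count)
    return [(float(m["lo"]), float(m["hi"]), float(m["cum"]), int(m["count"]))
            for m in BUCKET_RE.finditer(text)]

def percentile(buckets, pct):
    # Upper bound of the first bucket whose cumulative share reaches pct.
    for _lo, hi, cum, _count in buckets:
        if cum >= pct:
            return hi
    return None

sample = "15123.692 - 15224.517: 39.5661% ( 115) 15224.517 - 15325.342: 41.0382% ( 114)"
assert percentile(parse_buckets(sample), 40.0) == 15325.342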
00:08:52.634
00:08:52.634 10:07:55 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:53.611 Initializing NVMe Controllers
00:08:53.611 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:53.611 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:53.611 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:53.611 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:53.611 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:53.611 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:53.611 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:53.611 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:53.611 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:53.611 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:53.611 Initialization complete. Launching workers.
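The spdk_nvme_perf invocation above, read against the tool's usage text (inferred from the SPDK perf tool generally, not quoted from this log, so worth re-checking against the revision under test): -q 128 is the queue depth, -w write the I/O pattern, -o 12288 the I/O size in bytes (12 KiB), -t 1 the run time in seconds, -i 0 the shared-memory group ID, and -L turns on software latency tracking, doubled to -LL for the detailed per-bucket histograms as well as the percentile summaries. The throughput figures that follow are self-consistent with that I/O size, as a quick check shows:

# Hypothetical sanity check: MiB/s implied by the reported IOPS at -o 12288.
iops, io_size_bytes = 8084.47, 12288
print(round(iops * io_size_bytes / 2**20, 2))      # 94.74, the per-namespace MiB/s below
print(round(48570.50 * io_size_bytes / 2**20, 2))  # 569.19, the reported total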
00:08:53.611 ========================================================
00:08:53.611                                                                Latency(us)
00:08:53.611 Device Information                       :       IOPS      MiB/s    Average        min        max
00:08:53.611 PCIE (0000:00:11.0) NSID 1 from core    0:    8084.47      94.74   15859.71   11429.20   37675.31
00:08:53.611 PCIE (0000:00:13.0) NSID 1 from core    0:    8084.47      94.74   15837.86   11500.33   36476.17
00:08:53.611 PCIE (0000:00:10.0) NSID 1 from core    0:    8084.47      94.74   15811.98   11090.38   35270.30
00:08:53.611 PCIE (0000:00:12.0) NSID 1 from core    0:    8084.47      94.74   15787.18   11273.02   33565.57
00:08:53.611 PCIE (0000:00:12.0) NSID 2 from core    0:    8084.47      94.74   15763.51    9887.61   32287.04
00:08:53.611 PCIE (0000:00:12.0) NSID 3 from core    0:    8148.13      95.49   15616.87    9430.91   25102.35
00:08:53.611 ========================================================
00:08:53.611 Total                                    :   48570.50     569.19   15779.31    9430.91   37675.31
00:08:53.611
00:08:53.612 Summary latency data from core 0, all values in us (columns are PCIE 0000:00:11.0 NSID 1, 0000:00:13.0 NSID 1, 0000:00:10.0 NSID 1, and 0000:00:12.0 NSID 1/2/3):
00:08:53.612 =================================================================================
00:08:53.612 Percentile     11.0 ns1     13.0 ns1     10.0 ns1     12.0 ns1     12.0 ns2     12.0 ns3
00:08:53.612   1.00000%    11947.717    11947.717    11594.831    11594.831    11746.068    11746.068
00:08:53.612  10.00000%    13107.200    13107.200    13006.375    13006.375    12804.726    13006.375
00:08:53.612  25.00000%    14014.622    14115.446    14115.446    14115.446    14014.622    13913.797
00:08:53.612  50.00000%    15426.166    15325.342    15224.517    15224.517    15224.517    15325.342
00:08:53.612  75.00000%    16938.535    16837.711    17140.185    17241.009    17039.360    16938.535
00:08:53.612  90.00000%    19055.852    19156.677    19257.502    19156.677    19358.326    18854.203
00:08:53.612  95.00000%    20467.397    20467.397    20467.397    20366.572    20669.046    20467.397
00:08:53.612  98.00000%    22383.065    22181.415    22080.591    21878.942    22383.065    21677.292
00:08:53.612  99.00000%    29037.489    27827.594    26416.049    25004.505    24500.382    22483.889
00:08:53.612  99.50000%    36498.511    35288.615    33877.071    32465.526    31255.631    23895.434
00:08:53.612  99.90000%    37506.757    36296.862    35086.966    33473.772    32263.877    24903.680
00:08:53.612  99.99000%    37708.406    36498.511    35288.615    33675.422    32465.526    25105.329
00:08:53.612 (the 99.999%, 99.9999%, and 99.99999% figures equal the 99.99% figure for every namespace)
00:08:53.612
00:08:53.612 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:53.612 ==============================================================================
00:08:53.612        Range in us     Cumulative    IO count
00:08:53.613 [per-bucket data from 11393.182us (cumulative 0.0123%) up to 37708.406us (cumulative 100.0000%)]
00:08:53.613 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:53.613 ==============================================================================
00:08:53.613        Range in us     Cumulative    IO count
00:08:53.613 [per-bucket data from 11494.006us (cumulative 0.0369%) up to 36498.511us (cumulative 100.0000%)]
00:08:53.613 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:53.613 ==============================================================================
00:08:53.613        Range in us     Cumulative    IO count
00:08:53.614 [per-bucket data from 11040.295us (cumulative 0.0123%) up to 35288.615us (cumulative 100.0000%)]
00:08:53.614 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:53.614 ==============================================================================
00:08:53.614        Range in us     Cumulative    IO count
00:08:53.615 [per-bucket data from 11241.945us (cumulative 0.0246%) up to 33675.422us (cumulative 100.0000%)]
00:08:53.615 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:53.615 ==============================================================================
00:08:53.615        Range in us     Cumulative    IO count
00:08:53.616 [per-bucket data from 9880.812us (cumulative 0.0246%); histogram continues below]
99.4218% ( 7) 00:08:53.616 31053.982 - 31255.631: 99.5202% ( 8) 00:08:53.616 31255.631 - 31457.280: 99.6186% ( 8) 00:08:53.616 31457.280 - 31658.929: 99.7047% ( 7) 00:08:53.616 31658.929 - 31860.578: 99.8031% ( 8) 00:08:53.616 31860.578 - 32062.228: 99.8893% ( 7) 00:08:53.616 32062.228 - 32263.877: 99.9877% ( 8) 00:08:53.616 32263.877 - 32465.526: 100.0000% ( 1) 00:08:53.616 00:08:53.616 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:53.616 ============================================================================== 00:08:53.616 Range in us Cumulative IO count 00:08:53.616 9427.102 - 9477.514: 0.0122% ( 1) 00:08:53.616 9578.338 - 9628.751: 0.0488% ( 3) 00:08:53.616 9628.751 - 9679.163: 0.0732% ( 2) 00:08:53.616 9679.163 - 9729.575: 0.0977% ( 2) 00:08:53.616 9729.575 - 9779.988: 0.1343% ( 3) 00:08:53.616 9779.988 - 9830.400: 0.1587% ( 2) 00:08:53.616 9830.400 - 9880.812: 0.2075% ( 4) 00:08:53.616 9880.812 - 9931.225: 0.2319% ( 2) 00:08:53.616 9931.225 - 9981.637: 0.2686% ( 3) 00:08:53.616 9981.637 - 10032.049: 0.3662% ( 8) 00:08:53.616 10032.049 - 10082.462: 0.5859% ( 18) 00:08:53.616 10082.462 - 10132.874: 0.6104% ( 2) 00:08:53.616 10132.874 - 10183.286: 0.6348% ( 2) 00:08:53.616 10183.286 - 10233.698: 0.6592% ( 2) 00:08:53.616 10233.698 - 10284.111: 0.6836% ( 2) 00:08:53.616 10284.111 - 10334.523: 0.7080% ( 2) 00:08:53.616 10334.523 - 10384.935: 0.7324% ( 2) 00:08:53.616 10384.935 - 10435.348: 0.7568% ( 2) 00:08:53.616 10435.348 - 10485.760: 0.7690% ( 1) 00:08:53.616 10485.760 - 10536.172: 0.7812% ( 1) 00:08:53.616 11443.594 - 11494.006: 0.8057% ( 2) 00:08:53.616 11494.006 - 11544.418: 0.8423% ( 3) 00:08:53.616 11544.418 - 11594.831: 0.8667% ( 2) 00:08:53.616 11594.831 - 11645.243: 0.9155% ( 4) 00:08:53.616 11645.243 - 11695.655: 0.9766% ( 5) 00:08:53.616 11695.655 - 11746.068: 1.0620% ( 7) 00:08:53.616 11746.068 - 11796.480: 1.1230% ( 5) 00:08:53.616 11796.480 - 11846.892: 1.2695% ( 12) 00:08:53.616 11846.892 - 11897.305: 1.3672% ( 8) 00:08:53.616 11897.305 - 11947.717: 1.4526% ( 7) 00:08:53.616 11947.717 - 11998.129: 1.5747% ( 10) 00:08:53.616 11998.129 - 12048.542: 1.7822% ( 17) 00:08:53.616 12048.542 - 12098.954: 2.0386% ( 21) 00:08:53.616 12098.954 - 12149.366: 2.3682% ( 27) 00:08:53.616 12149.366 - 12199.778: 2.8564% ( 40) 00:08:53.616 12199.778 - 12250.191: 3.2471% ( 32) 00:08:53.616 12250.191 - 12300.603: 3.6499% ( 33) 00:08:53.616 12300.603 - 12351.015: 4.0161% ( 30) 00:08:53.616 12351.015 - 12401.428: 4.3823% ( 30) 00:08:53.616 12401.428 - 12451.840: 4.7852% ( 33) 00:08:53.616 12451.840 - 12502.252: 5.1514% ( 30) 00:08:53.616 12502.252 - 12552.665: 5.6152% ( 38) 00:08:53.616 12552.665 - 12603.077: 5.9692% ( 29) 00:08:53.616 12603.077 - 12653.489: 6.3843% ( 34) 00:08:53.616 12653.489 - 12703.902: 6.8481% ( 38) 00:08:53.616 12703.902 - 12754.314: 7.6660% ( 67) 00:08:53.616 12754.314 - 12804.726: 8.2886% ( 51) 00:08:53.616 12804.726 - 12855.138: 8.6914% ( 33) 00:08:53.616 12855.138 - 12905.551: 9.1675% ( 39) 00:08:53.616 12905.551 - 13006.375: 10.7788% ( 132) 00:08:53.616 13006.375 - 13107.200: 12.5122% ( 142) 00:08:53.616 13107.200 - 13208.025: 14.3555% ( 151) 00:08:53.616 13208.025 - 13308.849: 16.4917% ( 175) 00:08:53.616 13308.849 - 13409.674: 17.9321% ( 118) 00:08:53.616 13409.674 - 13510.498: 19.3848% ( 119) 00:08:53.616 13510.498 - 13611.323: 20.5688% ( 97) 00:08:53.616 13611.323 - 13712.148: 21.9116% ( 110) 00:08:53.616 13712.148 - 13812.972: 23.3398% ( 117) 00:08:53.616 13812.972 - 13913.797: 25.0610% ( 141) 00:08:53.616 13913.797 - 14014.622: 
26.8433% ( 146) 00:08:53.616 14014.622 - 14115.446: 28.7231% ( 154) 00:08:53.616 14115.446 - 14216.271: 30.6274% ( 156) 00:08:53.616 14216.271 - 14317.095: 32.3975% ( 145) 00:08:53.616 14317.095 - 14417.920: 34.1797% ( 146) 00:08:53.617 14417.920 - 14518.745: 36.0107% ( 150) 00:08:53.617 14518.745 - 14619.569: 37.7686% ( 144) 00:08:53.617 14619.569 - 14720.394: 39.6851% ( 157) 00:08:53.617 14720.394 - 14821.218: 41.9189% ( 183) 00:08:53.617 14821.218 - 14922.043: 44.0918% ( 178) 00:08:53.617 14922.043 - 15022.868: 45.8740% ( 146) 00:08:53.617 15022.868 - 15123.692: 47.9370% ( 169) 00:08:53.617 15123.692 - 15224.517: 49.7314% ( 147) 00:08:53.617 15224.517 - 15325.342: 51.6235% ( 155) 00:08:53.617 15325.342 - 15426.166: 53.3569% ( 142) 00:08:53.617 15426.166 - 15526.991: 55.0293% ( 137) 00:08:53.617 15526.991 - 15627.815: 56.7627% ( 142) 00:08:53.617 15627.815 - 15728.640: 58.1421% ( 113) 00:08:53.617 15728.640 - 15829.465: 59.4604% ( 108) 00:08:53.617 15829.465 - 15930.289: 60.7910% ( 109) 00:08:53.617 15930.289 - 16031.114: 62.3901% ( 131) 00:08:53.617 16031.114 - 16131.938: 64.3921% ( 164) 00:08:53.617 16131.938 - 16232.763: 66.1865% ( 147) 00:08:53.617 16232.763 - 16333.588: 67.6880% ( 123) 00:08:53.617 16333.588 - 16434.412: 69.0796% ( 114) 00:08:53.617 16434.412 - 16535.237: 70.5322% ( 119) 00:08:53.617 16535.237 - 16636.062: 71.7651% ( 101) 00:08:53.617 16636.062 - 16736.886: 72.9126% ( 94) 00:08:53.617 16736.886 - 16837.711: 74.0723% ( 95) 00:08:53.617 16837.711 - 16938.535: 75.1587% ( 89) 00:08:53.617 16938.535 - 17039.360: 76.5015% ( 110) 00:08:53.617 17039.360 - 17140.185: 77.5269% ( 84) 00:08:53.617 17140.185 - 17241.009: 78.3447% ( 67) 00:08:53.617 17241.009 - 17341.834: 79.2236% ( 72) 00:08:53.617 17341.834 - 17442.658: 79.9561% ( 60) 00:08:53.617 17442.658 - 17543.483: 80.6152% ( 54) 00:08:53.617 17543.483 - 17644.308: 81.5430% ( 76) 00:08:53.617 17644.308 - 17745.132: 82.5195% ( 80) 00:08:53.617 17745.132 - 17845.957: 83.8135% ( 106) 00:08:53.617 17845.957 - 17946.782: 84.7900% ( 80) 00:08:53.617 17946.782 - 18047.606: 85.6323% ( 69) 00:08:53.617 18047.606 - 18148.431: 86.2305% ( 49) 00:08:53.617 18148.431 - 18249.255: 86.9507% ( 59) 00:08:53.617 18249.255 - 18350.080: 87.7319% ( 64) 00:08:53.617 18350.080 - 18450.905: 88.3789% ( 53) 00:08:53.617 18450.905 - 18551.729: 88.8794% ( 41) 00:08:53.617 18551.729 - 18652.554: 89.3677% ( 40) 00:08:53.617 18652.554 - 18753.378: 89.9658% ( 49) 00:08:53.617 18753.378 - 18854.203: 90.4419% ( 39) 00:08:53.617 18854.203 - 18955.028: 90.7959% ( 29) 00:08:53.617 18955.028 - 19055.852: 91.1987% ( 33) 00:08:53.617 19055.852 - 19156.677: 91.5649% ( 30) 00:08:53.617 19156.677 - 19257.502: 91.8457% ( 23) 00:08:53.617 19257.502 - 19358.326: 92.1021% ( 21) 00:08:53.617 19358.326 - 19459.151: 92.3340% ( 19) 00:08:53.617 19459.151 - 19559.975: 92.6147% ( 23) 00:08:53.617 19559.975 - 19660.800: 93.0298% ( 34) 00:08:53.617 19660.800 - 19761.625: 93.3716% ( 28) 00:08:53.617 19761.625 - 19862.449: 93.6890% ( 26) 00:08:53.617 19862.449 - 19963.274: 93.9209% ( 19) 00:08:53.617 19963.274 - 20064.098: 94.1406% ( 18) 00:08:53.617 20064.098 - 20164.923: 94.4580% ( 26) 00:08:53.617 20164.923 - 20265.748: 94.7144% ( 21) 00:08:53.617 20265.748 - 20366.572: 94.9951% ( 23) 00:08:53.617 20366.572 - 20467.397: 95.2515% ( 21) 00:08:53.617 20467.397 - 20568.222: 95.4956% ( 20) 00:08:53.617 20568.222 - 20669.046: 95.6665% ( 14) 00:08:53.617 20669.046 - 20769.871: 95.9106% ( 20) 00:08:53.617 20769.871 - 20870.695: 96.1670% ( 21) 00:08:53.617 20870.695 - 20971.520: 
96.4233% ( 21) 00:08:53.617 20971.520 - 21072.345: 96.6919% ( 22) 00:08:53.617 21072.345 - 21173.169: 96.8750% ( 15) 00:08:53.617 21173.169 - 21273.994: 97.1924% ( 26) 00:08:53.617 21273.994 - 21374.818: 97.4487% ( 21) 00:08:53.617 21374.818 - 21475.643: 97.6807% ( 19) 00:08:53.617 21475.643 - 21576.468: 97.9126% ( 19) 00:08:53.617 21576.468 - 21677.292: 98.1323% ( 18) 00:08:53.617 21677.292 - 21778.117: 98.2666% ( 11) 00:08:53.617 21778.117 - 21878.942: 98.4009% ( 11) 00:08:53.617 21878.942 - 21979.766: 98.5352% ( 11) 00:08:53.617 21979.766 - 22080.591: 98.6572% ( 10) 00:08:53.617 22080.591 - 22181.415: 98.7793% ( 10) 00:08:53.617 22181.415 - 22282.240: 98.8770% ( 8) 00:08:53.617 22282.240 - 22383.065: 98.9746% ( 8) 00:08:53.617 22383.065 - 22483.889: 99.0234% ( 4) 00:08:53.617 22483.889 - 22584.714: 99.0845% ( 5) 00:08:53.617 22584.714 - 22685.538: 99.1333% ( 4) 00:08:53.617 22685.538 - 22786.363: 99.1699% ( 3) 00:08:53.617 22786.363 - 22887.188: 99.2065% ( 3) 00:08:53.617 22887.188 - 22988.012: 99.2188% ( 1) 00:08:53.617 23088.837 - 23189.662: 99.2310% ( 1) 00:08:53.617 23189.662 - 23290.486: 99.2676% ( 3) 00:08:53.617 23290.486 - 23391.311: 99.3164% ( 4) 00:08:53.617 23391.311 - 23492.135: 99.3408% ( 2) 00:08:53.617 23492.135 - 23592.960: 99.3896% ( 4) 00:08:53.617 23592.960 - 23693.785: 99.4263% ( 3) 00:08:53.617 23693.785 - 23794.609: 99.4629% ( 3) 00:08:53.617 23794.609 - 23895.434: 99.5117% ( 4) 00:08:53.617 23895.434 - 23996.258: 99.5483% ( 3) 00:08:53.617 23996.258 - 24097.083: 99.5850% ( 3) 00:08:53.617 24097.083 - 24197.908: 99.6338% ( 4) 00:08:53.617 24197.908 - 24298.732: 99.6704% ( 3) 00:08:53.617 24298.732 - 24399.557: 99.7070% ( 3) 00:08:53.617 24399.557 - 24500.382: 99.7437% ( 3) 00:08:53.617 24500.382 - 24601.206: 99.7925% ( 4) 00:08:53.617 24601.206 - 24702.031: 99.8291% ( 3) 00:08:53.617 24702.031 - 24802.855: 99.8779% ( 4) 00:08:53.617 24802.855 - 24903.680: 99.9146% ( 3) 00:08:53.617 24903.680 - 25004.505: 99.9512% ( 3) 00:08:53.617 25004.505 - 25105.329: 100.0000% ( 4) 00:08:53.617 00:08:53.617 10:07:56 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:53.617 00:08:53.617 real 0m2.528s 00:08:53.617 user 0m2.187s 00:08:53.617 sys 0m0.219s 00:08:53.617 10:07:56 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.617 10:07:56 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:53.617 ************************************ 00:08:53.617 END TEST nvme_perf 00:08:53.617 ************************************ 00:08:53.875 10:07:56 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:53.875 10:07:56 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:53.875 10:07:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.875 10:07:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.875 ************************************ 00:08:53.875 START TEST nvme_hello_world 00:08:53.875 ************************************ 00:08:53.875 10:07:56 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:53.875 Initializing NVMe Controllers 00:08:53.875 Attached to 0000:00:11.0 00:08:53.875 Namespace ID: 1 size: 5GB 00:08:53.875 Attached to 0000:00:13.0 00:08:53.875 Namespace ID: 1 size: 1GB 00:08:53.875 Attached to 0000:00:10.0 00:08:53.875 Namespace ID: 1 size: 6GB 00:08:53.875 Attached to 0000:00:12.0 00:08:53.875 Namespace ID: 1 size: 4GB 00:08:53.875 Namespace ID: 
2 size: 4GB 00:08:53.875 Namespace ID: 3 size: 4GB 00:08:53.875 Initialization complete. 00:08:53.875 INFO: using host memory buffer for IO 00:08:53.875 Hello world! 00:08:53.875 INFO: using host memory buffer for IO 00:08:53.875 Hello world! 00:08:53.875 INFO: using host memory buffer for IO 00:08:53.875 Hello world! 00:08:53.875 INFO: using host memory buffer for IO 00:08:53.875 Hello world! 00:08:53.875 INFO: using host memory buffer for IO 00:08:53.875 Hello world! 00:08:53.875 INFO: using host memory buffer for IO 00:08:53.875 Hello world! 00:08:54.133 00:08:54.133 real 0m0.249s 00:08:54.133 user 0m0.102s 00:08:54.133 sys 0m0.096s 00:08:54.133 10:07:57 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.133 ************************************ 00:08:54.133 END TEST nvme_hello_world 00:08:54.133 ************************************ 00:08:54.133 10:07:57 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:54.133 10:07:57 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:54.133 10:07:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.133 10:07:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.133 10:07:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.133 ************************************ 00:08:54.133 START TEST nvme_sgl 00:08:54.133 ************************************ 00:08:54.133 10:07:57 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:54.391 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:54.391 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:54.391 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:54.391 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:54.391 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:54.391 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:54.391 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:54.391 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:54.391 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:54.391 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:54.391 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:54.391 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:54.391 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_2 
Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:54.391 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:54.391 NVMe Readv/Writev Request test 00:08:54.391 Attached to 0000:00:11.0 00:08:54.391 Attached to 0000:00:13.0 00:08:54.391 Attached to 0000:00:10.0 00:08:54.391 Attached to 0000:00:12.0 00:08:54.391 0000:00:11.0: build_io_request_2 test passed 00:08:54.391 0000:00:11.0: build_io_request_4 test passed 00:08:54.391 0000:00:11.0: build_io_request_5 test passed 00:08:54.391 0000:00:11.0: build_io_request_6 test passed 00:08:54.391 0000:00:11.0: build_io_request_7 test passed 00:08:54.391 0000:00:11.0: build_io_request_10 test passed 00:08:54.391 0000:00:10.0: build_io_request_2 test passed 00:08:54.391 0000:00:10.0: build_io_request_4 test passed 00:08:54.391 0000:00:10.0: build_io_request_5 test passed 00:08:54.391 0000:00:10.0: build_io_request_6 test passed 00:08:54.391 0000:00:10.0: build_io_request_7 test passed 00:08:54.391 0000:00:10.0: build_io_request_10 test passed 00:08:54.391 Cleaning up... 00:08:54.391 00:08:54.391 real 0m0.346s 00:08:54.391 user 0m0.193s 00:08:54.391 sys 0m0.098s 00:08:54.391 10:07:57 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.391 ************************************ 00:08:54.391 END TEST nvme_sgl 00:08:54.391 ************************************ 00:08:54.391 10:07:57 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:54.391 10:07:57 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:54.391 10:07:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.391 10:07:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.391 10:07:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.391 ************************************ 00:08:54.391 START TEST nvme_e2edp 00:08:54.391 ************************************ 00:08:54.391 10:07:57 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:54.649 NVMe Write/Read with End-to-End data protection test 00:08:54.649 Attached to 0000:00:11.0 00:08:54.649 Attached to 0000:00:13.0 00:08:54.649 Attached to 0000:00:10.0 00:08:54.649 Attached to 0000:00:12.0 00:08:54.649 Cleaning up... 
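[editor's note] The nvme_e2edp test above exercises SPDK's end-to-end data protection write/read path. Below is a minimal sketch of how a protected write is issued through the public API, assuming a namespace already formatted with protection information; the function and variable names are illustrative and not taken from the test source.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Completion callback: report status of the protected write. */
    static void
    write_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    fprintf(stderr, "protected write failed\n");
            }
    }

    /* Sketch: one protected write with PRACT set, so the controller
     * generates the protection information itself; guard and reference
     * tag checking are enabled on the way through. Assumes `ns` and
     * `qpair` were obtained during attach. */
    static int
    protected_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf)
    {
            uint32_t flags = SPDK_NVME_IO_FLAGS_PRACT |
                             SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                             SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

            /* Metadata pointer may be NULL when PRACT=1 and the metadata
             * size equals the protection information size. */
            return spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, NULL,
                                                  0 /* lba */, 1 /* lba count */,
                                                  write_done, NULL, flags,
                                                  0xffff /* apptag mask */,
                                                  0 /* apptag */);
    }

The io_flags argument is what distinguishes this from a plain write; the rest of the call shape matches the unprotected spdk_nvme_ns_cmd_write variants.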
00:08:54.649 00:08:54.649 real 0m0.225s 00:08:54.649 user 0m0.071s 00:08:54.649 sys 0m0.105s 00:08:54.649 10:07:57 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.649 ************************************ 00:08:54.649 END TEST nvme_e2edp 00:08:54.649 ************************************ 00:08:54.650 10:07:57 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:54.908 10:07:57 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:54.908 10:07:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.908 10:07:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.908 10:07:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.908 ************************************ 00:08:54.908 START TEST nvme_reserve 00:08:54.908 ************************************ 00:08:54.908 10:07:57 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:54.908 ===================================================== 00:08:54.908 NVMe Controller at PCI bus 0, device 17, function 0 00:08:54.908 ===================================================== 00:08:54.908 Reservations: Not Supported 00:08:54.908 ===================================================== 00:08:54.908 NVMe Controller at PCI bus 0, device 19, function 0 00:08:54.908 ===================================================== 00:08:54.908 Reservations: Not Supported 00:08:54.908 ===================================================== 00:08:54.908 NVMe Controller at PCI bus 0, device 16, function 0 00:08:54.908 ===================================================== 00:08:54.908 Reservations: Not Supported 00:08:54.908 ===================================================== 00:08:54.908 NVMe Controller at PCI bus 0, device 18, function 0 00:08:54.908 ===================================================== 00:08:54.908 Reservations: Not Supported 00:08:54.908 Reservation test passed 00:08:54.908 00:08:54.908 real 0m0.223s 00:08:54.908 user 0m0.065s 00:08:54.908 sys 0m0.111s 00:08:54.908 ************************************ 00:08:54.908 END TEST nvme_reserve 00:08:54.908 ************************************ 00:08:54.908 10:07:57 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.908 10:07:57 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:55.166 10:07:58 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:55.166 10:07:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.166 10:07:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.166 10:07:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.166 ************************************ 00:08:55.166 START TEST nvme_err_injection 00:08:55.166 ************************************ 00:08:55.166 10:07:58 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:55.424 NVMe Error Injection test 00:08:55.424 Attached to 0000:00:11.0 00:08:55.424 Attached to 0000:00:13.0 00:08:55.424 Attached to 0000:00:10.0 00:08:55.424 Attached to 0000:00:12.0 00:08:55.424 0000:00:11.0: get features failed as expected 00:08:55.424 0000:00:13.0: get features failed as expected 00:08:55.424 0000:00:10.0: get features failed as expected 00:08:55.424 0000:00:12.0: get features failed as expected 00:08:55.424 
0000:00:11.0: get features successfully as expected 00:08:55.424 0000:00:13.0: get features successfully as expected 00:08:55.424 0000:00:10.0: get features successfully as expected 00:08:55.424 0000:00:12.0: get features successfully as expected 00:08:55.424 0000:00:11.0: read failed as expected 00:08:55.424 0000:00:13.0: read failed as expected 00:08:55.424 0000:00:10.0: read failed as expected 00:08:55.424 0000:00:12.0: read failed as expected 00:08:55.424 0000:00:12.0: read successfully as expected 00:08:55.424 0000:00:10.0: read successfully as expected 00:08:55.424 0000:00:13.0: read successfully as expected 00:08:55.424 0000:00:11.0: read successfully as expected 00:08:55.424 Cleaning up... 00:08:55.424 00:08:55.424 real 0m0.218s 00:08:55.424 user 0m0.085s 00:08:55.424 sys 0m0.090s 00:08:55.424 10:07:58 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.424 10:07:58 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 ************************************ 00:08:55.424 END TEST nvme_err_injection 00:08:55.424 ************************************ 00:08:55.424 10:07:58 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:55.424 10:07:58 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:55.424 10:07:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.424 10:07:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 ************************************ 00:08:55.424 START TEST nvme_overhead 00:08:55.424 ************************************ 00:08:55.424 10:07:58 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:56.799 Initializing NVMe Controllers 00:08:56.799 Attached to 0000:00:11.0 00:08:56.799 Attached to 0000:00:13.0 00:08:56.799 Attached to 0000:00:10.0 00:08:56.799 Attached to 0000:00:12.0 00:08:56.799 Initialization complete. Launching workers. 
00:08:56.799 submit (in ns) avg, min, max = 11505.0, 10143.1, 309188.5 00:08:56.799 complete (in ns) avg, min, max = 7757.8, 7167.7, 861687.7 00:08:56.799 00:08:56.799 Submit histogram 00:08:56.799 ================ 00:08:56.799 Range in us Cumulative Count 00:08:56.799 10.142 - 10.191: 0.0088% ( 1) 00:08:56.799 10.388 - 10.437: 0.0175% ( 1) 00:08:56.799 10.585 - 10.634: 0.0263% ( 1) 00:08:56.799 10.732 - 10.782: 0.0876% ( 7) 00:08:56.799 10.782 - 10.831: 0.6309% ( 62) 00:08:56.799 10.831 - 10.880: 3.3912% ( 315) 00:08:56.799 10.880 - 10.929: 10.7606% ( 841) 00:08:56.799 10.929 - 10.978: 23.0109% ( 1398) 00:08:56.799 10.978 - 11.028: 36.9085% ( 1586) 00:08:56.799 11.028 - 11.077: 48.8784% ( 1366) 00:08:56.799 11.077 - 11.126: 58.5086% ( 1099) 00:08:56.799 11.126 - 11.175: 65.0456% ( 746) 00:08:56.799 11.175 - 11.225: 69.9001% ( 554) 00:08:56.799 11.225 - 11.274: 73.4402% ( 404) 00:08:56.799 11.274 - 11.323: 76.4108% ( 339) 00:08:56.799 11.323 - 11.372: 79.1798% ( 316) 00:08:56.799 11.372 - 11.422: 81.4844% ( 263) 00:08:56.799 11.422 - 11.471: 83.3509% ( 213) 00:08:56.799 11.471 - 11.520: 84.9982% ( 188) 00:08:56.799 11.520 - 11.569: 86.1111% ( 127) 00:08:56.799 11.569 - 11.618: 86.8735% ( 87) 00:08:56.799 11.618 - 11.668: 87.6183% ( 85) 00:08:56.799 11.668 - 11.717: 88.1528% ( 61) 00:08:56.799 11.717 - 11.766: 88.6961% ( 62) 00:08:56.799 11.766 - 11.815: 89.3183% ( 71) 00:08:56.799 11.815 - 11.865: 90.1157% ( 91) 00:08:56.799 11.865 - 11.914: 91.0533% ( 107) 00:08:56.799 11.914 - 11.963: 91.8594% ( 92) 00:08:56.799 11.963 - 12.012: 92.6218% ( 87) 00:08:56.799 12.012 - 12.062: 93.2001% ( 66) 00:08:56.799 12.062 - 12.111: 93.6733% ( 54) 00:08:56.799 12.111 - 12.160: 94.1115% ( 50) 00:08:56.799 12.160 - 12.209: 94.4094% ( 34) 00:08:56.799 12.209 - 12.258: 94.6197% ( 24) 00:08:56.799 12.258 - 12.308: 94.7687% ( 17) 00:08:56.799 12.308 - 12.357: 94.8651% ( 11) 00:08:56.799 12.357 - 12.406: 94.9702% ( 12) 00:08:56.799 12.406 - 12.455: 95.0666% ( 11) 00:08:56.799 12.455 - 12.505: 95.1279% ( 7) 00:08:56.799 12.505 - 12.554: 95.1805% ( 6) 00:08:56.799 12.554 - 12.603: 95.2068% ( 3) 00:08:56.799 12.603 - 12.702: 95.2594% ( 6) 00:08:56.799 12.702 - 12.800: 95.3295% ( 8) 00:08:56.799 12.800 - 12.898: 95.3733% ( 5) 00:08:56.799 12.898 - 12.997: 95.3908% ( 2) 00:08:56.799 12.997 - 13.095: 95.4609% ( 8) 00:08:56.799 13.095 - 13.194: 95.5310% ( 8) 00:08:56.799 13.194 - 13.292: 95.5836% ( 6) 00:08:56.799 13.292 - 13.391: 95.6625% ( 9) 00:08:56.799 13.391 - 13.489: 95.7326% ( 8) 00:08:56.799 13.489 - 13.588: 95.8114% ( 9) 00:08:56.799 13.588 - 13.686: 95.9166% ( 12) 00:08:56.799 13.686 - 13.785: 95.9779% ( 7) 00:08:56.799 13.785 - 13.883: 96.0480% ( 8) 00:08:56.799 13.883 - 13.982: 96.1444% ( 11) 00:08:56.799 13.982 - 14.080: 96.2145% ( 8) 00:08:56.799 14.080 - 14.178: 96.2934% ( 9) 00:08:56.799 14.178 - 14.277: 96.3109% ( 2) 00:08:56.799 14.277 - 14.375: 96.3722% ( 7) 00:08:56.799 14.375 - 14.474: 96.4248% ( 6) 00:08:56.799 14.474 - 14.572: 96.5387% ( 13) 00:08:56.799 14.572 - 14.671: 96.5825% ( 5) 00:08:56.799 14.671 - 14.769: 96.6439% ( 7) 00:08:56.799 14.769 - 14.868: 96.6965% ( 6) 00:08:56.799 14.868 - 14.966: 96.7490% ( 6) 00:08:56.799 14.966 - 15.065: 96.8104% ( 7) 00:08:56.799 15.065 - 15.163: 96.8454% ( 4) 00:08:56.799 15.163 - 15.262: 96.9243% ( 9) 00:08:56.799 15.262 - 15.360: 96.9769% ( 6) 00:08:56.799 15.360 - 15.458: 97.0032% ( 3) 00:08:56.799 15.458 - 15.557: 97.0294% ( 3) 00:08:56.799 15.557 - 15.655: 97.0557% ( 3) 00:08:56.799 15.655 - 15.754: 97.1258% ( 8) 00:08:56.799 15.754 - 15.852: 
97.1521% ( 3) 00:08:56.799 15.852 - 15.951: 97.2397% ( 10) 00:08:56.799 15.951 - 16.049: 97.2836% ( 5) 00:08:56.799 16.049 - 16.148: 97.3361% ( 6) 00:08:56.799 16.148 - 16.246: 97.3887% ( 6) 00:08:56.799 16.246 - 16.345: 97.4413% ( 6) 00:08:56.799 16.345 - 16.443: 97.4763% ( 4) 00:08:56.799 16.443 - 16.542: 97.5202% ( 5) 00:08:56.799 16.542 - 16.640: 97.5464% ( 3) 00:08:56.799 16.640 - 16.738: 97.5815% ( 4) 00:08:56.799 16.738 - 16.837: 97.6253% ( 5) 00:08:56.799 16.837 - 16.935: 97.6516% ( 3) 00:08:56.799 16.935 - 17.034: 97.7042% ( 6) 00:08:56.799 17.034 - 17.132: 97.7743% ( 8) 00:08:56.799 17.132 - 17.231: 97.8093% ( 4) 00:08:56.799 17.231 - 17.329: 97.9145% ( 12) 00:08:56.799 17.329 - 17.428: 97.9671% ( 6) 00:08:56.799 17.428 - 17.526: 98.0372% ( 8) 00:08:56.799 17.526 - 17.625: 98.1160% ( 9) 00:08:56.799 17.625 - 17.723: 98.1949% ( 9) 00:08:56.799 17.723 - 17.822: 98.2387% ( 5) 00:08:56.799 17.822 - 17.920: 98.3438% ( 12) 00:08:56.799 17.920 - 18.018: 98.4402% ( 11) 00:08:56.799 18.018 - 18.117: 98.5103% ( 8) 00:08:56.799 18.117 - 18.215: 98.6418% ( 15) 00:08:56.799 18.215 - 18.314: 98.7469% ( 12) 00:08:56.799 18.314 - 18.412: 98.8871% ( 16) 00:08:56.799 18.412 - 18.511: 98.9572% ( 8) 00:08:56.799 18.511 - 18.609: 99.0273% ( 8) 00:08:56.799 18.609 - 18.708: 99.1150% ( 10) 00:08:56.799 18.708 - 18.806: 99.1851% ( 8) 00:08:56.799 18.806 - 18.905: 99.2727% ( 10) 00:08:56.799 18.905 - 19.003: 99.3253% ( 6) 00:08:56.799 19.003 - 19.102: 99.3691% ( 5) 00:08:56.799 19.102 - 19.200: 99.3954% ( 3) 00:08:56.799 19.200 - 19.298: 99.4304% ( 4) 00:08:56.799 19.397 - 19.495: 99.4567% ( 3) 00:08:56.799 19.495 - 19.594: 99.4918% ( 4) 00:08:56.799 19.594 - 19.692: 99.5005% ( 1) 00:08:56.799 19.692 - 19.791: 99.5093% ( 1) 00:08:56.799 19.791 - 19.889: 99.5443% ( 4) 00:08:56.799 19.988 - 20.086: 99.5706% ( 3) 00:08:56.799 20.086 - 20.185: 99.5794% ( 1) 00:08:56.799 20.185 - 20.283: 99.5969% ( 2) 00:08:56.799 20.283 - 20.382: 99.6057% ( 1) 00:08:56.799 20.480 - 20.578: 99.6407% ( 4) 00:08:56.799 20.578 - 20.677: 99.6495% ( 1) 00:08:56.799 20.677 - 20.775: 99.6670% ( 2) 00:08:56.799 20.972 - 21.071: 99.6758% ( 1) 00:08:56.799 21.071 - 21.169: 99.6845% ( 1) 00:08:56.799 21.268 - 21.366: 99.6933% ( 1) 00:08:56.799 21.465 - 21.563: 99.7021% ( 1) 00:08:56.799 21.563 - 21.662: 99.7196% ( 2) 00:08:56.799 21.662 - 21.760: 99.7284% ( 1) 00:08:56.799 21.760 - 21.858: 99.7459% ( 2) 00:08:56.799 22.154 - 22.252: 99.7546% ( 1) 00:08:56.799 22.351 - 22.449: 99.7634% ( 1) 00:08:56.799 22.745 - 22.843: 99.7722% ( 1) 00:08:56.799 22.843 - 22.942: 99.7809% ( 1) 00:08:56.799 23.434 - 23.532: 99.7897% ( 1) 00:08:56.799 23.926 - 24.025: 99.7985% ( 1) 00:08:56.799 24.517 - 24.615: 99.8072% ( 1) 00:08:56.799 24.812 - 24.911: 99.8160% ( 1) 00:08:56.799 25.403 - 25.600: 99.8335% ( 2) 00:08:56.799 26.191 - 26.388: 99.8423% ( 1) 00:08:56.799 26.388 - 26.585: 99.8510% ( 1) 00:08:56.799 26.978 - 27.175: 99.8598% ( 1) 00:08:56.799 27.175 - 27.372: 99.8686% ( 1) 00:08:56.799 27.569 - 27.766: 99.8773% ( 1) 00:08:56.799 28.751 - 28.948: 99.8948% ( 2) 00:08:56.800 31.114 - 31.311: 99.9036% ( 1) 00:08:56.800 32.492 - 32.689: 99.9211% ( 2) 00:08:56.800 34.658 - 34.855: 99.9299% ( 1) 00:08:56.800 45.095 - 45.292: 99.9474% ( 2) 00:08:56.800 65.772 - 66.166: 99.9562% ( 1) 00:08:56.800 67.742 - 68.135: 99.9649% ( 1) 00:08:56.800 70.498 - 70.892: 99.9737% ( 1) 00:08:56.800 76.012 - 76.406: 99.9825% ( 1) 00:08:56.800 252.062 - 253.637: 99.9912% ( 1) 00:08:56.800 308.775 - 310.351: 100.0000% ( 1) 00:08:56.800 00:08:56.800 Complete histogram 
00:08:56.800 ================== 00:08:56.800 Range in us Cumulative Count 00:08:56.800 7.138 - 7.188: 0.0088% ( 1) 00:08:56.800 7.188 - 7.237: 0.0438% ( 4) 00:08:56.800 7.237 - 7.286: 0.7886% ( 85) 00:08:56.800 7.286 - 7.335: 5.9061% ( 584) 00:08:56.800 7.335 - 7.385: 19.8563% ( 1592) 00:08:56.800 7.385 - 7.434: 39.1342% ( 2200) 00:08:56.800 7.434 - 7.483: 56.0550% ( 1931) 00:08:56.800 7.483 - 7.532: 70.1104% ( 1604) 00:08:56.800 7.532 - 7.582: 79.0571% ( 1021) 00:08:56.800 7.582 - 7.631: 84.8055% ( 656) 00:08:56.800 7.631 - 7.680: 88.4770% ( 419) 00:08:56.800 7.680 - 7.729: 90.6590% ( 249) 00:08:56.800 7.729 - 7.778: 92.1486% ( 170) 00:08:56.800 7.778 - 7.828: 93.2264% ( 123) 00:08:56.800 7.828 - 7.877: 93.9713% ( 85) 00:08:56.800 7.877 - 7.926: 94.5934% ( 71) 00:08:56.800 7.926 - 7.975: 95.0140% ( 48) 00:08:56.800 7.975 - 8.025: 95.3558% ( 39) 00:08:56.800 8.025 - 8.074: 95.6625% ( 35) 00:08:56.800 8.074 - 8.123: 95.9166% ( 29) 00:08:56.800 8.123 - 8.172: 96.1181% ( 23) 00:08:56.800 8.172 - 8.222: 96.3021% ( 21) 00:08:56.800 8.222 - 8.271: 96.5124% ( 24) 00:08:56.800 8.271 - 8.320: 96.8630% ( 40) 00:08:56.800 8.320 - 8.369: 97.0119% ( 17) 00:08:56.800 8.369 - 8.418: 97.1784% ( 19) 00:08:56.800 8.418 - 8.468: 97.4501% ( 31) 00:08:56.800 8.468 - 8.517: 97.6516% ( 23) 00:08:56.800 8.517 - 8.566: 97.7305% ( 9) 00:08:56.800 8.566 - 8.615: 97.7918% ( 7) 00:08:56.800 8.615 - 8.665: 97.8619% ( 8) 00:08:56.800 8.665 - 8.714: 97.9232% ( 7) 00:08:56.800 8.714 - 8.763: 97.9758% ( 6) 00:08:56.800 8.763 - 8.812: 97.9933% ( 2) 00:08:56.800 8.812 - 8.862: 98.0021% ( 1) 00:08:56.800 8.862 - 8.911: 98.0109% ( 1) 00:08:56.800 8.960 - 9.009: 98.0284% ( 2) 00:08:56.800 9.009 - 9.058: 98.0459% ( 2) 00:08:56.800 9.058 - 9.108: 98.0547% ( 1) 00:08:56.800 9.108 - 9.157: 98.0634% ( 1) 00:08:56.800 9.157 - 9.206: 98.0722% ( 1) 00:08:56.800 9.206 - 9.255: 98.1073% ( 4) 00:08:56.800 9.354 - 9.403: 98.1160% ( 1) 00:08:56.800 9.452 - 9.502: 98.1248% ( 1) 00:08:56.800 9.649 - 9.698: 98.1423% ( 2) 00:08:56.800 9.698 - 9.748: 98.1511% ( 1) 00:08:56.800 9.748 - 9.797: 98.1598% ( 1) 00:08:56.800 9.797 - 9.846: 98.1774% ( 2) 00:08:56.800 9.846 - 9.895: 98.1861% ( 1) 00:08:56.800 9.895 - 9.945: 98.1949% ( 1) 00:08:56.800 10.043 - 10.092: 98.2299% ( 4) 00:08:56.800 10.092 - 10.142: 98.2650% ( 4) 00:08:56.800 10.191 - 10.240: 98.2913% ( 3) 00:08:56.800 10.240 - 10.289: 98.3088% ( 2) 00:08:56.800 10.289 - 10.338: 98.3438% ( 4) 00:08:56.800 10.338 - 10.388: 98.3526% ( 1) 00:08:56.800 10.388 - 10.437: 98.3789% ( 3) 00:08:56.800 10.437 - 10.486: 98.4052% ( 3) 00:08:56.800 10.486 - 10.535: 98.4227% ( 2) 00:08:56.800 10.535 - 10.585: 98.4315% ( 1) 00:08:56.800 10.634 - 10.683: 98.4402% ( 1) 00:08:56.800 10.683 - 10.732: 98.4490% ( 1) 00:08:56.800 10.782 - 10.831: 98.4665% ( 2) 00:08:56.800 10.880 - 10.929: 98.4928% ( 3) 00:08:56.800 10.929 - 10.978: 98.5016% ( 1) 00:08:56.800 11.077 - 11.126: 98.5103% ( 1) 00:08:56.800 11.175 - 11.225: 98.5454% ( 4) 00:08:56.800 11.914 - 11.963: 98.5542% ( 1) 00:08:56.800 11.963 - 12.012: 98.5629% ( 1) 00:08:56.800 12.160 - 12.209: 98.5717% ( 1) 00:08:56.800 12.308 - 12.357: 98.5804% ( 1) 00:08:56.800 12.357 - 12.406: 98.5892% ( 1) 00:08:56.800 12.702 - 12.800: 98.5980% ( 1) 00:08:56.800 12.997 - 13.095: 98.6067% ( 1) 00:08:56.800 13.095 - 13.194: 98.6155% ( 1) 00:08:56.800 13.194 - 13.292: 98.6330% ( 2) 00:08:56.800 13.292 - 13.391: 98.6593% ( 3) 00:08:56.800 13.391 - 13.489: 98.6768% ( 2) 00:08:56.800 13.489 - 13.588: 98.6944% ( 2) 00:08:56.800 13.588 - 13.686: 98.7557% ( 7) 00:08:56.800 13.686 - 
13.785: 98.8083% ( 6) 00:08:56.800 13.785 - 13.883: 98.8258% ( 2) 00:08:56.800 13.883 - 13.982: 98.8871% ( 7) 00:08:56.800 13.982 - 14.080: 98.9309% ( 5) 00:08:56.800 14.080 - 14.178: 99.0186% ( 10) 00:08:56.800 14.178 - 14.277: 99.0536% ( 4) 00:08:56.800 14.277 - 14.375: 99.0799% ( 3) 00:08:56.800 14.375 - 14.474: 99.1413% ( 7) 00:08:56.800 14.474 - 14.572: 99.2114% ( 8) 00:08:56.800 14.572 - 14.671: 99.2552% ( 5) 00:08:56.800 14.671 - 14.769: 99.3253% ( 8) 00:08:56.800 14.769 - 14.868: 99.3954% ( 8) 00:08:56.800 14.868 - 14.966: 99.4567% ( 7) 00:08:56.800 14.966 - 15.065: 99.5268% ( 8) 00:08:56.800 15.065 - 15.163: 99.6057% ( 9) 00:08:56.800 15.163 - 15.262: 99.6495% ( 5) 00:08:56.800 15.262 - 15.360: 99.6670% ( 2) 00:08:56.800 15.360 - 15.458: 99.6845% ( 2) 00:08:56.800 15.458 - 15.557: 99.7108% ( 3) 00:08:56.800 15.754 - 15.852: 99.7196% ( 1) 00:08:56.800 15.852 - 15.951: 99.7284% ( 1) 00:08:56.800 15.951 - 16.049: 99.7371% ( 1) 00:08:56.800 16.049 - 16.148: 99.7546% ( 2) 00:08:56.800 16.148 - 16.246: 99.7634% ( 1) 00:08:56.800 16.246 - 16.345: 99.7722% ( 1) 00:08:56.800 16.345 - 16.443: 99.7809% ( 1) 00:08:56.800 16.443 - 16.542: 99.7985% ( 2) 00:08:56.800 16.640 - 16.738: 99.8335% ( 4) 00:08:56.800 17.428 - 17.526: 99.8423% ( 1) 00:08:56.800 17.625 - 17.723: 99.8510% ( 1) 00:08:56.800 17.822 - 17.920: 99.8598% ( 1) 00:08:56.800 17.920 - 18.018: 99.8686% ( 1) 00:08:56.800 18.117 - 18.215: 99.8773% ( 1) 00:08:56.800 18.511 - 18.609: 99.8861% ( 1) 00:08:56.800 19.298 - 19.397: 99.8948% ( 1) 00:08:56.800 19.495 - 19.594: 99.9036% ( 1) 00:08:56.800 19.692 - 19.791: 99.9124% ( 1) 00:08:56.800 20.185 - 20.283: 99.9211% ( 1) 00:08:56.800 20.382 - 20.480: 99.9299% ( 1) 00:08:56.800 20.874 - 20.972: 99.9387% ( 1) 00:08:56.800 21.366 - 21.465: 99.9474% ( 1) 00:08:56.800 22.646 - 22.745: 99.9562% ( 1) 00:08:56.800 33.477 - 33.674: 99.9649% ( 1) 00:08:56.800 34.462 - 34.658: 99.9737% ( 1) 00:08:56.800 322.954 - 324.529: 99.9825% ( 1) 00:08:56.800 326.105 - 327.680: 99.9912% ( 1) 00:08:56.800 857.009 - 863.311: 100.0000% ( 1) 00:08:56.800 00:08:56.800 ************************************ 00:08:56.800 END TEST nvme_overhead 00:08:56.800 ************************************ 00:08:56.800 00:08:56.800 real 0m1.216s 00:08:56.800 user 0m1.058s 00:08:56.800 sys 0m0.111s 00:08:56.800 10:07:59 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.800 10:07:59 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:56.800 10:07:59 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:56.800 10:07:59 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:56.800 10:07:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.800 10:07:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.800 ************************************ 00:08:56.800 START TEST nvme_arbitration 00:08:56.800 ************************************ 00:08:56.800 10:07:59 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:00.079 Initializing NVMe Controllers 00:09:00.079 Attached to 0000:00:11.0 00:09:00.079 Attached to 0000:00:13.0 00:09:00.079 Attached to 0000:00:10.0 00:09:00.079 Attached to 0000:00:12.0 00:09:00.079 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:09:00.079 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:09:00.079 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:09:00.079 Associating QEMU 
NVMe Ctrl (12342 ) with lcore 3 00:09:00.079 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:00.079 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:00.079 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:00.079 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:00.079 Initialization complete. Launching workers. 00:09:00.079 Starting thread on core 1 with urgent priority queue 00:09:00.079 Starting thread on core 2 with urgent priority queue 00:09:00.079 Starting thread on core 3 with urgent priority queue 00:09:00.079 Starting thread on core 0 with urgent priority queue 00:09:00.079 QEMU NVMe Ctrl (12341 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:09:00.079 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios 00:09:00.079 QEMU NVMe Ctrl (12343 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:09:00.079 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:09:00.079 QEMU NVMe Ctrl (12340 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:09:00.079 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:09:00.079 ======================================================== 00:09:00.079 00:09:00.079 ************************************ 00:09:00.079 END TEST nvme_arbitration 00:09:00.079 ************************************ 00:09:00.079 00:09:00.079 real 0m3.297s 00:09:00.079 user 0m9.233s 00:09:00.079 sys 0m0.119s 00:09:00.079 10:08:02 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.079 10:08:02 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:00.079 10:08:02 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:00.079 10:08:02 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.079 10:08:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.079 10:08:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.079 ************************************ 00:09:00.079 START TEST nvme_single_aen 00:09:00.079 ************************************ 00:09:00.079 10:08:02 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:00.079 Asynchronous Event Request test 00:09:00.079 Attached to 0000:00:11.0 00:09:00.079 Attached to 0000:00:13.0 00:09:00.079 Attached to 0000:00:10.0 00:09:00.079 Attached to 0000:00:12.0 00:09:00.079 Reset controller to setup AER completions for this process 00:09:00.079 Registering asynchronous event callbacks... 
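[editor's note] The registration step the log reports here is a single call on each attached controller. A minimal sketch of that step, assuming `ctrlr` is an attached controller (the callback name and log-page decoding are illustrative; the field layout follows the NVMe spec's async event completion format):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* AER completion callback: cdw0 carries the async event; bits 23:16
     * hold the log page identifier to read for details. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            uint32_t log_page = (cpl->cdw0 >> 16) & 0xff;
            printf("AER fired, log page 0x%02x\n", log_page);
    }

    static void
    setup_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }

After registering, the test lowers the temperature-threshold feature below the current temperature, which is why each controller then fires an AER for log page 2 (SMART/health) as seen below.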
00:09:00.079 Getting orig temperature thresholds of all controllers 00:09:00.079 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.079 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.079 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.079 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.079 Setting all controllers temperature threshold low to trigger AER 00:09:00.079 Waiting for all controllers temperature threshold to be set lower 00:09:00.079 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.079 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:00.079 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.079 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:00.079 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.079 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:00.079 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.079 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:00.079 Waiting for all controllers to trigger AER and reset threshold 00:09:00.079 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.079 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.079 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.079 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.079 Cleaning up... 00:09:00.079 00:09:00.079 real 0m0.213s 00:09:00.079 user 0m0.064s 00:09:00.079 sys 0m0.100s 00:09:00.079 ************************************ 00:09:00.079 END TEST nvme_single_aen 00:09:00.079 ************************************ 00:09:00.079 10:08:03 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.079 10:08:03 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:00.079 10:08:03 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:00.079 10:08:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.079 10:08:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.079 10:08:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.337 ************************************ 00:09:00.337 START TEST nvme_doorbell_aers 00:09:00.337 ************************************ 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:00.337 10:08:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:00.595 [2024-10-17 10:08:03.433362] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:10.561 Executing: test_write_invalid_db 00:09:10.561 Waiting for AER completion... 00:09:10.561 Failure: test_write_invalid_db 00:09:10.561 00:09:10.561 Executing: test_invalid_db_write_overflow_sq 00:09:10.561 Waiting for AER completion... 00:09:10.561 Failure: test_invalid_db_write_overflow_sq 00:09:10.561 00:09:10.561 Executing: test_invalid_db_write_overflow_cq 00:09:10.561 Waiting for AER completion... 00:09:10.561 Failure: test_invalid_db_write_overflow_cq 00:09:10.561 00:09:10.561 10:08:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:10.561 10:08:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:10.561 [2024-10-17 10:08:13.461586] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:20.529 Executing: test_write_invalid_db 00:09:20.529 Waiting for AER completion... 00:09:20.529 Failure: test_write_invalid_db 00:09:20.529 00:09:20.529 Executing: test_invalid_db_write_overflow_sq 00:09:20.529 Waiting for AER completion... 00:09:20.529 Failure: test_invalid_db_write_overflow_sq 00:09:20.529 00:09:20.529 Executing: test_invalid_db_write_overflow_cq 00:09:20.529 Waiting for AER completion... 00:09:20.529 Failure: test_invalid_db_write_overflow_cq 00:09:20.529 00:09:20.529 10:08:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:20.529 10:08:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.529 [2024-10-17 10:08:23.552968] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:30.496 Executing: test_write_invalid_db 00:09:30.496 Waiting for AER completion... 00:09:30.496 Failure: test_write_invalid_db 00:09:30.496 00:09:30.496 Executing: test_invalid_db_write_overflow_sq 00:09:30.496 Waiting for AER completion... 00:09:30.496 Failure: test_invalid_db_write_overflow_sq 00:09:30.496 00:09:30.496 Executing: test_invalid_db_write_overflow_cq 00:09:30.496 Waiting for AER completion... 
00:09:30.496 Failure: test_invalid_db_write_overflow_cq 00:09:30.496 00:09:30.496 10:08:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:30.496 10:08:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:30.496 [2024-10-17 10:08:33.539122] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.458 Executing: test_write_invalid_db 00:09:40.458 Waiting for AER completion... 00:09:40.458 Failure: test_write_invalid_db 00:09:40.458 00:09:40.458 Executing: test_invalid_db_write_overflow_sq 00:09:40.458 Waiting for AER completion... 00:09:40.458 Failure: test_invalid_db_write_overflow_sq 00:09:40.458 00:09:40.458 Executing: test_invalid_db_write_overflow_cq 00:09:40.458 Waiting for AER completion... 00:09:40.458 Failure: test_invalid_db_write_overflow_cq 00:09:40.458 00:09:40.458 00:09:40.458 real 0m40.188s 00:09:40.458 user 0m34.097s 00:09:40.458 sys 0m5.642s 00:09:40.458 10:08:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.458 ************************************ 00:09:40.458 10:08:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:40.458 END TEST nvme_doorbell_aers 00:09:40.458 ************************************ 00:09:40.458 10:08:43 nvme -- nvme/nvme.sh@97 -- # uname 00:09:40.458 10:08:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:40.458 10:08:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:40.458 10:08:43 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:40.458 10:08:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.458 10:08:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.458 ************************************ 00:09:40.458 START TEST nvme_multi_aen 00:09:40.458 ************************************ 00:09:40.458 10:08:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:40.717 [2024-10-17 10:08:43.595474] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.595739] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.595820] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.597304] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.597429] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.597492] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.598700] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. 
Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.598805] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.598865] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.600120] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.600238] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 [2024-10-17 10:08:43.600373] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63366) is not found. Dropping the request. 00:09:40.717 Child process pid: 63887 00:09:40.717 [Child] Asynchronous Event Request test 00:09:40.717 [Child] Attached to 0000:00:11.0 00:09:40.717 [Child] Attached to 0000:00:13.0 00:09:40.717 [Child] Attached to 0000:00:10.0 00:09:40.717 [Child] Attached to 0000:00:12.0 00:09:40.717 [Child] Registering asynchronous event callbacks... 00:09:40.717 [Child] Getting orig temperature thresholds of all controllers 00:09:40.717 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.717 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.717 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.717 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.717 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:40.717 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.717 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.717 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.717 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.717 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.717 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.717 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.717 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.717 [Child] Cleaning up... 00:09:40.976 Asynchronous Event Request test 00:09:40.976 Attached to 0000:00:11.0 00:09:40.976 Attached to 0000:00:13.0 00:09:40.976 Attached to 0000:00:10.0 00:09:40.976 Attached to 0000:00:12.0 00:09:40.976 Reset controller to setup AER completions for this process 00:09:40.976 Registering asynchronous event callbacks... 
00:09:40.976 Getting orig temperature thresholds of all controllers 00:09:40.976 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.976 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.976 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.976 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:40.976 Setting all controllers temperature threshold low to trigger AER 00:09:40.976 Waiting for all controllers temperature threshold to be set lower 00:09:40.976 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.976 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:40.976 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.976 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:40.976 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.976 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:40.976 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:40.976 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:40.976 Waiting for all controllers to trigger AER and reset threshold 00:09:40.976 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.976 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.976 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.976 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.976 Cleaning up... 00:09:40.976 00:09:40.976 real 0m0.447s 00:09:40.976 user 0m0.141s 00:09:40.976 sys 0m0.199s 00:09:40.976 10:08:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.976 10:08:43 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 ************************************ 00:09:40.976 END TEST nvme_multi_aen 00:09:40.976 ************************************ 00:09:40.976 10:08:43 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:40.976 10:08:43 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:40.976 10:08:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.976 10:08:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 ************************************ 00:09:40.976 START TEST nvme_startup 00:09:40.976 ************************************ 00:09:40.976 10:08:43 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:41.234 Initializing NVMe Controllers 00:09:41.234 Attached to 0000:00:11.0 00:09:41.234 Attached to 0000:00:13.0 00:09:41.234 Attached to 0000:00:10.0 00:09:41.234 Attached to 0000:00:12.0 00:09:41.234 Initialization complete. 00:09:41.234 Time used:145687.094 (us). 
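A note on the nvme_multi_aen and nvme_doorbell_aers runs above: they provoke AERs by dropping each controller's temperature threshold (originally 343 Kelvin) below the reported composite temperature (323 Kelvin), then restoring it from aer_cb. A rough equivalent against a kernel-owned controller, using nvme-cli instead of SPDK, might look like the sketch below; the device path and the lowered value are illustrative, and with the kernel driver the resulting AER is consumed by the driver (it may surface in dmesg) rather than by the caller.

```bash
# Sketch (not part of the test): the temperature-threshold trick used by the
# AER tests above, reproduced with nvme-cli. Device path and values are
# illustrative for this run's 343 K threshold / 323 K current readings.
dev=/dev/nvme0
nvme get-feature "$dev" --feature-id=4              # feature 0x04 = temperature threshold
nvme set-feature "$dev" --feature-id=4 --value=300  # below the 323 K current temp -> AER fires
# The SPDK tests receive the event in aer_cb and re-read log page 2 (SMART / health);
# under the kernel driver the event is handled in-kernel instead.
nvme set-feature "$dev" --feature-id=4 --value=343  # restore the original threshold
```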
00:09:41.234 00:09:41.234 real 0m0.215s 00:09:41.234 user 0m0.064s 00:09:41.234 sys 0m0.101s 00:09:41.234 10:08:44 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.234 10:08:44 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:41.234 ************************************ 00:09:41.234 END TEST nvme_startup 00:09:41.234 ************************************ 00:09:41.234 10:08:44 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:41.234 10:08:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:41.234 10:08:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.234 10:08:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.234 ************************************ 00:09:41.234 START TEST nvme_multi_secondary 00:09:41.234 ************************************ 00:09:41.234 10:08:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:41.234 10:08:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63943 00:09:41.234 10:08:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63944 00:09:41.234 10:08:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:41.234 10:08:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:41.234 10:08:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:44.512 Initializing NVMe Controllers 00:09:44.512 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:44.512 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:44.512 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:44.512 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:44.512 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:44.512 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:44.512 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:44.512 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:44.512 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:44.512 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:44.512 Initialization complete. Launching workers. 
00:09:44.512 ======================================================== 00:09:44.512 Latency(us) 00:09:44.512 Device Information : IOPS MiB/s Average min max 00:09:44.512 PCIE (0000:00:11.0) NSID 1 from core 1: 7513.51 29.35 2129.05 973.80 10175.38 00:09:44.512 PCIE (0000:00:13.0) NSID 1 from core 1: 7513.51 29.35 2129.00 988.96 10193.50 00:09:44.512 PCIE (0000:00:10.0) NSID 1 from core 1: 7513.51 29.35 2127.95 977.13 10217.75 00:09:44.512 PCIE (0000:00:12.0) NSID 1 from core 1: 7513.51 29.35 2128.97 1032.24 9943.65 00:09:44.512 PCIE (0000:00:12.0) NSID 2 from core 1: 7513.51 29.35 2128.98 1009.34 10073.49 00:09:44.512 PCIE (0000:00:12.0) NSID 3 from core 1: 7513.51 29.35 2129.11 969.17 10265.42 00:09:44.512 ======================================================== 00:09:44.512 Total : 45081.09 176.10 2128.84 969.17 10265.42 00:09:44.512 00:09:44.512 Initializing NVMe Controllers 00:09:44.512 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:44.512 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:44.512 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:44.512 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:44.512 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:44.512 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:44.512 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:44.512 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:44.512 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:44.512 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:44.512 Initialization complete. Launching workers. 00:09:44.512 ======================================================== 00:09:44.512 Latency(us) 00:09:44.512 Device Information : IOPS MiB/s Average min max 00:09:44.512 PCIE (0000:00:11.0) NSID 1 from core 2: 3106.30 12.13 5149.99 1196.61 23289.33 00:09:44.512 PCIE (0000:00:13.0) NSID 1 from core 2: 3106.30 12.13 5150.24 1311.42 23265.42 00:09:44.512 PCIE (0000:00:10.0) NSID 1 from core 2: 3106.30 12.13 5148.97 1189.56 20559.58 00:09:44.512 PCIE (0000:00:12.0) NSID 1 from core 2: 3106.30 12.13 5150.29 1338.47 20185.55 00:09:44.512 PCIE (0000:00:12.0) NSID 2 from core 2: 3106.30 12.13 5150.68 1246.43 25167.98 00:09:44.512 PCIE (0000:00:12.0) NSID 3 from core 2: 3106.30 12.13 5155.01 1199.61 20576.92 00:09:44.512 ======================================================== 00:09:44.512 Total : 18637.77 72.80 5150.86 1189.56 25167.98 00:09:44.512 00:09:44.512 10:08:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63943 00:09:46.414 Initializing NVMe Controllers 00:09:46.414 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:46.414 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:46.414 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:46.414 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:46.414 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:46.414 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:46.414 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:46.414 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:46.414 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:46.414 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:46.414 Initialization complete. Launching workers. 
00:09:46.414 ======================================================== 00:09:46.414 Latency(us) 00:09:46.414 Device Information : IOPS MiB/s Average min max 00:09:46.414 PCIE (0000:00:11.0) NSID 1 from core 0: 10526.94 41.12 1519.53 692.23 7682.68 00:09:46.414 PCIE (0000:00:13.0) NSID 1 from core 0: 10526.94 41.12 1519.50 680.89 7511.89 00:09:46.414 PCIE (0000:00:10.0) NSID 1 from core 0: 10526.94 41.12 1518.59 674.77 7822.57 00:09:46.414 PCIE (0000:00:12.0) NSID 1 from core 0: 10526.94 41.12 1519.45 685.04 8353.16 00:09:46.414 PCIE (0000:00:12.0) NSID 2 from core 0: 10526.94 41.12 1519.43 610.11 8649.41 00:09:46.414 PCIE (0000:00:12.0) NSID 3 from core 0: 10526.94 41.12 1519.40 596.25 8027.30 00:09:46.414 ======================================================== 00:09:46.414 Total : 63161.65 246.73 1519.32 596.25 8649.41 00:09:46.414 00:09:46.414 10:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63944 00:09:46.414 10:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:46.414 10:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64013 00:09:46.414 10:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64014 00:09:46.414 10:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:46.414 10:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:49.696 Initializing NVMe Controllers 00:09:49.696 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.696 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.696 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.696 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.696 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:49.696 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:49.696 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:49.696 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:49.696 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:49.696 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:49.696 Initialization complete. Launching workers. 
00:09:49.696 ======================================================== 00:09:49.696 Latency(us) 00:09:49.696 Device Information : IOPS MiB/s Average min max 00:09:49.696 PCIE (0000:00:11.0) NSID 1 from core 0: 7484.44 29.24 2137.43 777.07 6915.88 00:09:49.696 PCIE (0000:00:13.0) NSID 1 from core 0: 7484.44 29.24 2137.67 760.69 6545.44 00:09:49.696 PCIE (0000:00:10.0) NSID 1 from core 0: 7484.44 29.24 2136.92 733.44 7293.72 00:09:49.696 PCIE (0000:00:12.0) NSID 1 from core 0: 7484.44 29.24 2137.86 747.69 7313.94 00:09:49.696 PCIE (0000:00:12.0) NSID 2 from core 0: 7484.44 29.24 2138.10 770.09 6911.27 00:09:49.696 PCIE (0000:00:12.0) NSID 3 from core 0: 7484.44 29.24 2138.20 769.75 6867.95 00:09:49.696 ======================================================== 00:09:49.696 Total : 44906.62 175.42 2137.70 733.44 7313.94 00:09:49.696 00:09:49.696 Initializing NVMe Controllers 00:09:49.696 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.696 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.696 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.696 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.696 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:49.696 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:49.696 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:49.696 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:49.696 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:49.696 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:49.696 Initialization complete. Launching workers. 00:09:49.696 ======================================================== 00:09:49.696 Latency(us) 00:09:49.696 Device Information : IOPS MiB/s Average min max 00:09:49.696 PCIE (0000:00:11.0) NSID 1 from core 1: 7273.05 28.41 2199.43 782.79 5926.38 00:09:49.696 PCIE (0000:00:13.0) NSID 1 from core 1: 7273.05 28.41 2199.20 766.41 5937.70 00:09:49.696 PCIE (0000:00:10.0) NSID 1 from core 1: 7273.05 28.41 2197.84 765.57 6284.40 00:09:49.696 PCIE (0000:00:12.0) NSID 1 from core 1: 7273.05 28.41 2198.72 776.56 6243.03 00:09:49.696 PCIE (0000:00:12.0) NSID 2 from core 1: 7273.05 28.41 2198.63 781.36 6233.25 00:09:49.696 PCIE (0000:00:12.0) NSID 3 from core 1: 7273.05 28.41 2198.56 771.53 6321.94 00:09:49.696 ======================================================== 00:09:49.696 Total : 43638.31 170.46 2198.73 765.57 6321.94 00:09:49.696 00:09:51.596 Initializing NVMe Controllers 00:09:51.596 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.596 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.596 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.597 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.597 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:51.597 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:51.597 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:51.597 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:51.597 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:51.597 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:51.597 Initialization complete. Launching workers. 
00:09:51.597 ======================================================== 00:09:51.597 Latency(us) 00:09:51.597 Device Information : IOPS MiB/s Average min max 00:09:51.597 PCIE (0000:00:11.0) NSID 1 from core 2: 4278.11 16.71 3737.15 749.91 13580.08 00:09:51.597 PCIE (0000:00:13.0) NSID 1 from core 2: 4278.11 16.71 3736.34 766.67 13450.57 00:09:51.597 PCIE (0000:00:10.0) NSID 1 from core 2: 4278.11 16.71 3734.61 752.49 13909.81 00:09:51.597 PCIE (0000:00:12.0) NSID 1 from core 2: 4278.11 16.71 3736.03 756.59 13671.43 00:09:51.597 PCIE (0000:00:12.0) NSID 2 from core 2: 4278.11 16.71 3735.78 614.07 13300.76 00:09:51.597 PCIE (0000:00:12.0) NSID 3 from core 2: 4278.11 16.71 3736.10 590.01 12943.81 00:09:51.597 ======================================================== 00:09:51.597 Total : 25668.64 100.27 3736.00 590.01 13909.81 00:09:51.597 00:09:51.597 10:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64013 00:09:51.597 10:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64014 00:09:51.597 00:09:51.597 real 0m10.492s 00:09:51.597 user 0m18.357s 00:09:51.597 sys 0m0.643s 00:09:51.597 10:08:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.597 10:08:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:51.597 ************************************ 00:09:51.597 END TEST nvme_multi_secondary 00:09:51.597 ************************************ 00:09:51.597 10:08:54 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:51.597 10:08:54 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:51.597 10:08:54 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/62970 ]] 00:09:51.597 10:08:54 nvme -- common/autotest_common.sh@1090 -- # kill 62970 00:09:51.597 10:08:54 nvme -- common/autotest_common.sh@1091 -- # wait 62970 00:09:51.597 [2024-10-17 10:08:54.673484] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.674125] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.674189] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.674208] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.676491] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.676546] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.676563] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.676579] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.679532] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 
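For context on the nvme_multi_secondary output above: the three spdk_nvme_perf invocations share state through -i 0 (the shared memory group ID), so the core-0 run acts as the DPDK primary process and the core-1 and core-2 runs attach as secondaries against the same controllers. Stripped of the test harness, the pattern traced at nvme.sh@51-57 is roughly:

```bash
# Sketch of the multi-process perf pattern above; flags match the trace.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
$PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary: core 0, longest runtime
pid0=$!
$PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
pid1=$!
$PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary on core 2, foreground
wait $pid0
wait $pid1
```

The per-core latency tables above reflect this sharing: the secondary started last (core 2, 0x4) contends with two already-running processes, which is why its per-device IOPS come in well below the core-0 run.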
00:09:51.597 [2024-10-17 10:08:54.679637] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.679668] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.679697] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.682284] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.682339] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.682357] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.597 [2024-10-17 10:08:54.682375] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63886) is not found. Dropping the request. 00:09:51.858 10:08:54 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:51.858 10:08:54 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:51.858 10:08:54 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:51.858 10:08:54 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:51.859 10:08:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.859 10:08:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.859 ************************************ 00:09:51.859 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:51.859 ************************************ 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:51.859 * Looking for test storage... 
00:09:51.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.859 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.118 --rc genhtml_branch_coverage=1 00:09:52.118 --rc genhtml_function_coverage=1 00:09:52.118 --rc genhtml_legend=1 00:09:52.118 --rc geninfo_all_blocks=1 00:09:52.118 --rc geninfo_unexecuted_blocks=1 00:09:52.118 00:09:52.118 ' 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.118 --rc genhtml_branch_coverage=1 00:09:52.118 --rc genhtml_function_coverage=1 00:09:52.118 --rc genhtml_legend=1 00:09:52.118 --rc geninfo_all_blocks=1 00:09:52.118 --rc geninfo_unexecuted_blocks=1 00:09:52.118 00:09:52.118 ' 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.118 --rc genhtml_branch_coverage=1 00:09:52.118 --rc genhtml_function_coverage=1 00:09:52.118 --rc genhtml_legend=1 00:09:52.118 --rc geninfo_all_blocks=1 00:09:52.118 --rc geninfo_unexecuted_blocks=1 00:09:52.118 00:09:52.118 ' 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.118 --rc genhtml_branch_coverage=1 00:09:52.118 --rc genhtml_function_coverage=1 00:09:52.118 --rc genhtml_legend=1 00:09:52.118 --rc geninfo_all_blocks=1 00:09:52.118 --rc geninfo_unexecuted_blocks=1 00:09:52.118 00:09:52.118 ' 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:52.118 
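The bdev_nvme_reset_stuck_adm_cmd test being set up here checks that a controller reset completes an admin command that error injection has deliberately left pending, within test_timeout seconds. Stripped of the xtrace noise, the RPC sequence it drives (traced below) is roughly the following; it assumes a running spdk_tgt, which the trace starts next.

```bash
# Sketch of the reset-stuck-admin-cmd flow traced below. RPC names, flags, and
# the base64 command are the ones in the trace; assumes spdk_tgt is listening.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Hold the next admin GET FEATURES (opc 0x0a) for up to 15 s, then fail it sct=0/sc=1.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Submit the admin command that will get stuck (GET FEATURES, number of queues).
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
sleep 2
$rpc bdev_nvme_reset_controller nvme0   # the reset must complete the stuck command manually
wait                                     # send_cmd returns with the injected status
$rpc bdev_nvme_detach_controller nvme0
```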
10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:52.118 10:08:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:52.118 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:52.118 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:52.118 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:52.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.118 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:52.118 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64171 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64171 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64171 ']' 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
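The get_first_nvme_bdf helper traced just above resolves to 0000:00:10.0 by asking gen_nvme.sh to enumerate local controllers. Isolated from the tracing, the idiom is:

```bash
# The bdf-discovery idiom from the trace above (rootdir is the SPDK checkout).
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
bdf=${bdfs[0]}   # 0000:00:10.0 on this runner; 00:11.0, 00:12.0, 00:13.0 follow
```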
00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:52.119 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:52.119 [2024-10-17 10:08:55.080880] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:09:52.119 [2024-10-17 10:08:55.080990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64171 ] 00:09:52.378 [2024-10-17 10:08:55.233861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.378 [2024-10-17 10:08:55.338759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.378 [2024-10-17 10:08:55.339240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.378 [2024-10-17 10:08:55.339494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.378 [2024-10-17 10:08:55.339516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.947 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.947 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:52.947 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:52.947 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.947 10:08:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 nvme0n1 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_mmIE7.txt 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 true 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1729159736 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64194 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:52.947 10:08:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:52.947 10:08:56 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 [2024-10-17 10:08:58.031190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:55.480 [2024-10-17 10:08:58.031476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:55.480 [2024-10-17 10:08:58.031509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:55.480 [2024-10-17 10:08:58.031523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.480 [2024-10-17 10:08:58.033846] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:55.480 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64194 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64194 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64194 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_mmIE7.txt 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_mmIE7.txt 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64171 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64171 ']' 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64171 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64171 00:09:55.480 killing process with pid 64171 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64171' 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64171 00:09:55.480 10:08:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64171 00:09:56.854 10:08:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:56.854 10:08:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:56.854 00:09:56.854 real 0m4.872s 00:09:56.854 user 0m17.311s 
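The base64_decode_bits calls above unpack the completion that bdev_nvme_send_cmd wrote to the temp file: the 16-byte cpl is base64-decoded, and the status word (the upper half of completion dword 3) is sliced into SC and SCT. Done by hand for this run's captured value, assuming the standard NVMe status layout (phase bit 0, SC bits 8:1, SCT bits 11:9):

```bash
# Decoding this run's captured completion outside the test harness.
cpl=AAAAAAAAAAAAAAAAAAACAA==                        # .cpl value from /tmp/err_inj_mmIE7.txt
bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
status=$(( (bytes[15] << 8) | bytes[14] ))           # bytes 14-15 hold the status word
printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
# -> SC=0x1 SCT=0x0, matching the injected --sc 1 --sct 0
```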
00:09:56.854 sys 0m0.510s 00:09:56.854 10:08:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.854 10:08:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:56.854 ************************************ 00:09:56.854 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:56.854 ************************************ 00:09:56.854 10:08:59 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:56.854 10:08:59 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:56.854 10:08:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:56.854 10:08:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.854 10:08:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.854 ************************************ 00:09:56.854 START TEST nvme_fio 00:09:56.854 ************************************ 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:56.854 10:08:59 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:56.854 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:57.112 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:57.112 10:08:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:57.370 10:09:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:57.370 10:09:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:57.370 
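The fio_nvme / fio_plugin helpers traced here (and continuing below) wrap stock fio around SPDK's userspace NVMe ioengine, preloading libasan, found via ldd on the plugin, ahead of the plugin so the sanitized build resolves its interceptors. A standalone equivalent would be roughly the sketch below; the paths match this run, but the rw/iodepth/runtime job parameters are assumptions rather than the contents of example_config.fio.

```bash
# Sketch: driving fio through SPDK's NVMe plugin, as the trace does.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # same lookup as the trace
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --name=test --ioengine=spdk --thread=1 \
    --filename='trtype=PCIe traddr=0000.00.10.0' \
    --bs=4096 --rw=randrw --iodepth=128 --time_based=1 --runtime=10
```

The dots in traddr stand in for colons, which fio would otherwise parse as filename separators; thread=1 is required by the SPDK engine.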
10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:57.370 10:09:00 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:57.370 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:57.370 fio-3.35 00:09:57.370 Starting 1 thread 00:10:05.477 00:10:05.477 test: (groupid=0, jobs=1): err= 0: pid=64335: Thu Oct 17 10:09:08 2024 00:10:05.477 read: IOPS=23.8k, BW=92.8MiB/s (97.3MB/s)(186MiB/2001msec) 00:10:05.477 slat (nsec): min=3359, max=86910, avg=4986.73, stdev=2117.71 00:10:05.477 clat (usec): min=230, max=7316, avg=2692.03, stdev=701.17 00:10:05.477 lat (usec): min=234, max=7321, avg=2697.02, stdev=702.35 00:10:05.477 clat percentiles (usec): 00:10:05.477 | 1.00th=[ 1696], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:10:05.477 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:05.477 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 2999], 95.00th=[ 4146], 00:10:05.477 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6521], 99.95th=[ 6587], 00:10:05.477 | 99.99th=[ 6980] 00:10:05.477 bw ( KiB/s): min=91760, max=95256, per=97.89%, avg=93058.67, stdev=1913.43, samples=3 00:10:05.477 iops : min=22940, max=23814, avg=23264.67, stdev=478.36, samples=3 00:10:05.477 write: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(185MiB/2001msec); 0 zone resets 00:10:05.477 slat (nsec): min=3397, max=61984, avg=5213.98, stdev=2060.00 00:10:05.477 clat (usec): min=204, max=7429, avg=2686.30, stdev=684.71 00:10:05.477 lat (usec): min=209, max=7434, avg=2691.52, stdev=685.83 00:10:05.477 clat percentiles (usec): 00:10:05.477 | 1.00th=[ 1680], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:10:05.477 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:05.477 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 2999], 95.00th=[ 4047], 00:10:05.477 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6521], 99.95th=[ 6587], 00:10:05.477 | 99.99th=[ 7046] 00:10:05.477 bw ( KiB/s): min=90552, max=96952, per=98.60%, avg=93176.00, stdev=3351.91, samples=3 00:10:05.477 iops : min=22638, max=24238, avg=23294.00, stdev=837.98, samples=3 00:10:05.477 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 
1000=0.03% 00:10:05.477 lat (msec) : 2=2.70%, 4=91.97%, 10=5.28% 00:10:05.477 cpu : usr=99.05%, sys=0.15%, ctx=4, majf=0, minf=608 00:10:05.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:05.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.477 issued rwts: total=47558,47273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.477 00:10:05.477 Run status group 0 (all jobs): 00:10:05.477 READ: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=186MiB (195MB), run=2001-2001msec 00:10:05.477 WRITE: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=185MiB (194MB), run=2001-2001msec 00:10:05.735 ----------------------------------------------------- 00:10:05.735 Suppressions used: 00:10:05.735 count bytes template 00:10:05.735 1 32 /usr/src/fio/parse.c 00:10:05.735 1 8 libtcmalloc_minimal.so 00:10:05.735 ----------------------------------------------------- 00:10:05.735 00:10:05.735 10:09:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:05.735 10:09:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:05.735 10:09:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:05.735 10:09:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:05.992 10:09:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:05.992 10:09:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:05.992 10:09:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:05.992 10:09:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:05.992 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.250 
10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.250 10:09:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.250 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:06.250 fio-3.35 00:10:06.250 Starting 1 thread 00:10:09.555 00:10:09.555 test: (groupid=0, jobs=1): err= 0: pid=64391: Thu Oct 17 10:09:12 2024 00:10:09.555 read: IOPS=15.6k, BW=60.9MiB/s (63.8MB/s)(123MiB/2015msec) 00:10:09.555 slat (nsec): min=3369, max=68346, avg=5295.94, stdev=2958.99 00:10:09.555 clat (usec): min=605, max=37878, avg=2834.15, stdev=1461.35 00:10:09.555 lat (usec): min=609, max=37890, avg=2839.44, stdev=1461.98 00:10:09.555 clat percentiles (usec): 00:10:09.555 | 1.00th=[ 1188], 5.00th=[ 1483], 10.00th=[ 1778], 20.00th=[ 2212], 00:10:09.555 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:09.555 | 70.00th=[ 2769], 80.00th=[ 3326], 90.00th=[ 4228], 95.00th=[ 5145], 00:10:09.555 | 99.00th=[ 6783], 99.50th=[ 7373], 99.90th=[17433], 99.95th=[34341], 00:10:09.555 | 99.99th=[34341] 00:10:09.555 bw ( KiB/s): min=41616, max=94960, per=100.00%, avg=62722.00, stdev=25013.73, samples=4 00:10:09.555 iops : min=10404, max=23740, avg=15680.50, stdev=6253.43, samples=4 00:10:09.555 write: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(123MiB/2015msec); 0 zone resets 00:10:09.555 slat (usec): min=3, max=112, avg= 5.61, stdev= 2.86 00:10:09.555 clat (usec): min=585, max=46628, avg=5353.41, stdev=5661.71 00:10:09.555 lat (usec): min=589, max=46632, avg=5359.02, stdev=5662.08 00:10:09.555 clat percentiles (usec): 00:10:09.555 | 1.00th=[ 1303], 5.00th=[ 1811], 10.00th=[ 2212], 20.00th=[ 2376], 00:10:09.555 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2966], 00:10:09.555 | 70.00th=[ 4015], 80.00th=[ 7439], 90.00th=[14353], 95.00th=[18220], 00:10:09.555 | 99.00th=[24773], 99.50th=[27395], 99.90th=[42730], 99.95th=[45351], 00:10:09.555 | 99.99th=[46400] 00:10:09.555 bw ( KiB/s): min=41832, max=94952, per=100.00%, avg=62650.00, stdev=24951.77, samples=4 00:10:09.555 iops : min=10458, max=23738, avg=15662.50, stdev=6237.94, samples=4 00:10:09.555 lat (usec) : 750=0.01%, 1000=0.09% 00:10:09.555 lat (msec) : 2=10.58%, 4=68.17%, 10=12.66%, 20=6.74%, 50=1.75% 00:10:09.555 cpu : usr=99.06%, sys=0.15%, ctx=17, majf=0, minf=608 00:10:09.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:09.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.555 issued rwts: total=31393,31420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.555 00:10:09.555 Run status group 0 (all jobs): 00:10:09.555 READ: bw=60.9MiB/s (63.8MB/s), 60.9MiB/s-60.9MiB/s (63.8MB/s-63.8MB/s), io=123MiB (129MB), run=2015-2015msec 00:10:09.555 WRITE: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=123MiB (129MB), run=2015-2015msec 00:10:09.817 ----------------------------------------------------- 00:10:09.817 Suppressions used: 00:10:09.817 count bytes template 00:10:09.817 1 32 /usr/src/fio/parse.c 00:10:09.817 1 8 
libtcmalloc_minimal.so 00:10:09.817 ----------------------------------------------------- 00:10:09.817 00:10:09.817 10:09:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:09.817 10:09:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:09.817 10:09:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:09.817 10:09:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.077 10:09:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.077 10:09:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:10.338 10:09:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.338 10:09:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.338 10:09:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:10.338 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:10.338 fio-3.35 00:10:10.338 Starting 1 thread 00:10:18.482 00:10:18.482 test: (groupid=0, jobs=1): err= 0: pid=64452: Thu Oct 17 10:09:20 2024 00:10:18.482 read: IOPS=22.0k, BW=85.8MiB/s (89.9MB/s)(172MiB/2001msec) 00:10:18.482 slat (nsec): min=3330, max=50546, avg=5047.55, stdev=2183.94 00:10:18.482 clat (usec): min=219, max=7890, avg=2904.71, stdev=769.13 00:10:18.482 lat (usec): min=223, max=7899, avg=2909.75, stdev=770.38 00:10:18.482 clat percentiles (usec): 
00:10:18.482 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2507], 00:10:18.482 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:10:18.482 | 70.00th=[ 2835], 80.00th=[ 2999], 90.00th=[ 3589], 95.00th=[ 4686], 00:10:18.482 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7373], 99.95th=[ 7504], 00:10:18.482 | 99.99th=[ 7701] 00:10:18.482 bw ( KiB/s): min=83368, max=90496, per=99.07%, avg=86994.67, stdev=3565.65, samples=3 00:10:18.482 iops : min=20842, max=22624, avg=21748.67, stdev=891.41, samples=3 00:10:18.482 write: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec); 0 zone resets 00:10:18.482 slat (nsec): min=3515, max=75038, avg=5338.95, stdev=2175.85 00:10:18.482 clat (usec): min=239, max=7913, avg=2921.09, stdev=774.13 00:10:18.482 lat (usec): min=244, max=7934, avg=2926.43, stdev=775.38 00:10:18.482 clat percentiles (usec): 00:10:18.482 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:10:18.482 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2769], 00:10:18.482 | 70.00th=[ 2835], 80.00th=[ 2999], 90.00th=[ 3589], 95.00th=[ 4752], 00:10:18.482 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 7570], 00:10:18.482 | 99.99th=[ 7767] 00:10:18.482 bw ( KiB/s): min=83376, max=90136, per=99.98%, avg=87208.00, stdev=3469.48, samples=3 00:10:18.482 iops : min=20844, max=22534, avg=21802.00, stdev=867.37, samples=3 00:10:18.482 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:18.482 lat (msec) : 2=0.77%, 4=91.50%, 10=7.69% 00:10:18.482 cpu : usr=99.15%, sys=0.15%, ctx=3, majf=0, minf=607 00:10:18.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.482 issued rwts: total=43929,43636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.482 00:10:18.482 Run status group 0 (all jobs): 00:10:18.482 READ: bw=85.8MiB/s (89.9MB/s), 85.8MiB/s-85.8MiB/s (89.9MB/s-89.9MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:18.482 WRITE: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:10:18.482 ----------------------------------------------------- 00:10:18.482 Suppressions used: 00:10:18.482 count bytes template 00:10:18.482 1 32 /usr/src/fio/parse.c 00:10:18.482 1 8 libtcmalloc_minimal.so 00:10:18.482 ----------------------------------------------------- 00:10:18.482 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:18.482 10:09:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.482 10:09:20 nvme.nvme_fio -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:18.482 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:18.483 10:09:20 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.483 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:18.483 fio-3.35 00:10:18.483 Starting 1 thread 00:10:28.473 00:10:28.473 test: (groupid=0, jobs=1): err= 0: pid=64508: Thu Oct 17 10:09:29 2024 00:10:28.473 read: IOPS=19.6k, BW=76.6MiB/s (80.3MB/s)(153MiB/2001msec) 00:10:28.473 slat (nsec): min=3458, max=67248, avg=5830.68, stdev=2963.44 00:10:28.473 clat (usec): min=227, max=9218, avg=3241.71, stdev=1009.43 00:10:28.473 lat (usec): min=232, max=9248, avg=3247.55, stdev=1011.23 00:10:28.473 clat percentiles (usec): 00:10:28.473 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:10:28.473 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2933], 00:10:28.473 | 70.00th=[ 3130], 80.00th=[ 3851], 90.00th=[ 4752], 95.00th=[ 5538], 00:10:28.473 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7570], 99.95th=[ 7898], 00:10:28.473 | 99.99th=[ 9110] 00:10:28.473 bw ( KiB/s): min=69056, max=79680, per=96.64%, avg=75781.33, stdev=5848.92, samples=3 00:10:28.473 iops : min=17264, max=19920, avg=18945.33, stdev=1462.23, samples=3 00:10:28.473 write: IOPS=19.6k, BW=76.4MiB/s (80.1MB/s)(153MiB/2001msec); 0 zone resets 00:10:28.473 slat (nsec): min=3507, max=89793, avg=6118.58, stdev=3134.61 00:10:28.473 clat (usec): min=310, max=9114, avg=3271.28, stdev=1028.36 00:10:28.473 lat (usec): min=314, max=9122, avg=3277.40, stdev=1030.21 00:10:28.473 clat percentiles (usec): 00:10:28.473 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:28.473 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 
2966], 00:10:28.473 | 70.00th=[ 3163], 80.00th=[ 3949], 90.00th=[ 4817], 95.00th=[ 5604], 00:10:28.473 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7570], 99.95th=[ 7898], 00:10:28.473 | 99.99th=[ 8848] 00:10:28.473 bw ( KiB/s): min=69064, max=79672, per=96.87%, avg=75802.67, stdev=5857.24, samples=3 00:10:28.473 iops : min=17266, max=19918, avg=18950.67, stdev=1464.31, samples=3 00:10:28.473 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:10:28.473 lat (msec) : 2=0.24%, 4=80.64%, 10=19.08% 00:10:28.473 cpu : usr=99.10%, sys=0.05%, ctx=5, majf=0, minf=605 00:10:28.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:28.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.473 issued rwts: total=39227,39147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.473 00:10:28.473 Run status group 0 (all jobs): 00:10:28.473 READ: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=153MiB (161MB), run=2001-2001msec 00:10:28.473 WRITE: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=153MiB (160MB), run=2001-2001msec 00:10:28.473 ----------------------------------------------------- 00:10:28.473 Suppressions used: 00:10:28.473 count bytes template 00:10:28.473 1 32 /usr/src/fio/parse.c 00:10:28.473 1 8 libtcmalloc_minimal.so 00:10:28.473 ----------------------------------------------------- 00:10:28.473 00:10:28.473 10:09:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:28.473 10:09:30 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:28.473 00:10:28.473 real 0m30.315s 00:10:28.473 user 0m19.886s 00:10:28.473 sys 0m17.977s 00:10:28.473 10:09:30 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.473 10:09:30 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 ************************************ 00:10:28.473 END TEST nvme_fio 00:10:28.473 ************************************ 00:10:28.473 00:10:28.473 real 1m39.696s 00:10:28.473 user 3m40.793s 00:10:28.473 sys 0m28.629s 00:10:28.473 10:09:30 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.473 10:09:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 ************************************ 00:10:28.473 END TEST nvme 00:10:28.473 ************************************ 00:10:28.473 10:09:30 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:28.473 10:09:30 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:28.473 10:09:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:28.473 10:09:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.473 10:09:30 -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 ************************************ 00:10:28.473 START TEST nvme_scc 00:10:28.473 ************************************ 00:10:28.473 10:09:30 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:28.473 * Looking for test storage... 
00:10:28.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:28.473 10:09:30 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:28.473 10:09:30 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:28.473 10:09:30 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:28.473 10:09:30 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:28.473 10:09:30 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.473 10:09:30 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:28.474 10:09:30 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.474 10:09:30 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.474 --rc genhtml_branch_coverage=1 00:10:28.474 --rc genhtml_function_coverage=1 00:10:28.474 --rc genhtml_legend=1 00:10:28.474 --rc geninfo_all_blocks=1 00:10:28.474 --rc geninfo_unexecuted_blocks=1 00:10:28.474 00:10:28.474 ' 00:10:28.474 10:09:30 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.474 --rc genhtml_branch_coverage=1 00:10:28.474 --rc genhtml_function_coverage=1 00:10:28.474 --rc genhtml_legend=1 00:10:28.474 --rc geninfo_all_blocks=1 00:10:28.474 --rc geninfo_unexecuted_blocks=1 00:10:28.474 00:10:28.474 ' 00:10:28.474 10:09:30 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:28.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.474 --rc genhtml_branch_coverage=1 00:10:28.474 --rc genhtml_function_coverage=1 00:10:28.474 --rc genhtml_legend=1 00:10:28.474 --rc geninfo_all_blocks=1 00:10:28.474 --rc geninfo_unexecuted_blocks=1 00:10:28.474 00:10:28.474 ' 00:10:28.474 10:09:30 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.474 --rc genhtml_branch_coverage=1 00:10:28.474 --rc genhtml_function_coverage=1 00:10:28.474 --rc genhtml_legend=1 00:10:28.474 --rc geninfo_all_blocks=1 00:10:28.474 --rc geninfo_unexecuted_blocks=1 00:10:28.474 00:10:28.474 ' 00:10:28.474 10:09:30 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.474 10:09:30 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.474 10:09:30 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.474 10:09:30 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.474 10:09:30 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.474 10:09:30 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:28.474 10:09:30 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
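The trace below declares functions.sh's global arrays, rebinds the PCIe devices to the kernel nvme driver, and then runs scan_nvme_ctrls, which walks /sys/class/nvme/* and caches every id-ctrl/id-ns field into bash associative arrays (nvme0[vid], nvme0n1[nsze], ...). A minimal sketch of the parse loop that nvme_get expands field by field in the xtrace that follows, assuming nvme-cli's plain "field : value" output; the trimming details here are illustrative, not the script's own:

    # Sketch only: cache `nvme id-ctrl` output into an associative array,
    # the way the nvme_get trace below does one eval per field.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
        ctrl[${reg//[[:space:]]/}]=${val# }    # strip key padding, keep the raw value
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${ctrl[vid]}"                        # prints 0x1b36 for the QEMU controller traced below

Splitting on the first colon only (read's last variable keeps the remainder) is what lets compound values such as "0 rwl:0 idle_power:- active_power:-" survive intact, exactly as they appear in the nvme0[rwt] assignment below.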
00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:28.474 10:09:30 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:28.474 10:09:30 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.474 10:09:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:28.474 10:09:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:28.474 10:09:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:28.474 10:09:30 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:28.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:28.474 Waiting for block devices as requested 00:10:28.474 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.474 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.474 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.474 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:33.768 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:33.768 10:09:36 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:33.768 10:09:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:33.768 10:09:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:33.768 10:09:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:33.768 10:09:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:33.768 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:33.769 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:33.770 10:09:36 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.770 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.771 10:09:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:33.771 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
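What this run of trace is executing is the nvme_get helper from nvme/functions.sh: it runs nvme-cli against a controller or namespace (functions.sh@16), reads the "register : value" output line by line with IFS=: (functions.sh@21), skips lines with nothing to store (@22), and evals each pair into a global associative array such as nvme0n1 (@23). A hedged reconstruction consistent with those line numbers (the exact reg/val trimming is an assumption, though the stored keys like "ps0" and values like '12340 ' suggest keys are stripped of whitespace while values keep trailing padding):

  nvme_get() {                             # e.g. nvme_get nvme0n1 id-ns /dev/nvme0n1
      local ref=$1 reg val                 # @17: target array name + loop variables
      shift                                # @18: remaining args are the nvme-cli sub-command
      local -gA "$ref=()"                  # @20: declare e.g. nvme0n1=() globally
      while IFS=: read -r reg val; do      # @21: split "reg : val" lines
          [[ -n $val ]] || continue        # @22: nothing to store on this line
          reg=${reg//[[:space:]]/}         # assumption: "ps 0 " -> "ps0"
          val=${val# }                     # assumption: drop one leading space only
          eval "${ref}[$reg]=\"\$val\""    # @23: nvme0n1[nsze]="0x140000" etc.
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: id-ns/id-ctrl output
  }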
00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:33.772 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
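A worked example of what these raw fields mean (not in the log itself; it relies on the standard NVMe field semantics, where flbas bits 3:0 index the active LBA format and lbads is log2 of the block size): nvme0n1 reported nsze=0x140000 and flbas=0x4 above, and the lbaf4 descriptor read a few lines below carries lbads:12, i.e. 4096-byte blocks.

  nsze=$((0x140000))     # namespace size in logical blocks
  block=$((1 << 12))     # lbads:12 -> 4096 B per block
  printf '%d blocks * %d B = %d bytes (%d GiB)\n' \
      "$nsze" "$block" "$((nsze * block))" "$((nsze * block >> 30))"
  # -> 1310720 blocks * 4096 B = 5368709120 bytes (5 GiB)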
00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.773 10:09:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:33.773 10:09:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:33.773 10:09:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:33.773 10:09:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:33.773 10:09:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:33.773 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:33.774 10:09:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.774 
10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.774 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
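The nvme1[oacs]=0x12a stored just above is a bitmask of optional admin commands. A hypothetical decoder (bit names per the NVMe base spec's OACS assignments; this helper is not part of functions.sh):

  declare -A oacs_bits=(
      [0]="Security Send/Receive" [1]="Format NVM" [2]="Firmware Download/Commit"
      [3]="Namespace Management"  [4]="Device Self-test" [5]="Directives"
      [6]="NVMe-MI"               [7]="Virtualization Management"
      [8]="Doorbell Buffer Config"
  )
  for bit in {0..8}; do
      (((0x12a >> bit) & 1)) && echo "oacs bit $bit: ${oacs_bits[$bit]}"
  done
  # 0x12a sets bits 1, 3, 5 and 8: Format NVM, Namespace Management,
  # Directives, Doorbell Buffer Config (typical of the QEMU controller).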
00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:33.775 10:09:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.775 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:33.776 10:09:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:33.776 10:09:36 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.776 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
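The suite running here is nvme_scc (Simple Copy), and the nvme1[oncs]=0x15d recorded above is the relevant gate: per the NVMe base spec, ONCS bit 8 advertises the Copy command, and 0x15d has that bit set. A hypothetical check in the spirit of the suite's feature gating (the real helper in nvme/functions.sh may differ):

  supports_copy() {
      local -n _ctrl=$1                 # nameref to a nvme_get-populated array
      (((_ctrl[oncs] & 0x100) != 0))    # ONCS bit 8: Copy command
  }
  supports_copy nvme1 && echo "nvme1 advertises Simple Copy (oncs=0x15d)"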
00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.777 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.778 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.779 
10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:33.779 10:09:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:33.779 10:09:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:33.779 10:09:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:33.779 10:09:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:33.779 10:09:36 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:33.779 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:33.780 10:09:36 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.780 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:33.781 10:09:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
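The repeating IFS=: / read -r reg val / eval pattern traced throughout this scan is the whole parsing mechanism: functions.sh runs nvme-cli's id-ctrl (or id-ns) against a device node, splits each "key : value" output line on the colon, and eval-assigns the pair into a global associative array named after the device. A minimal standalone sketch of that pattern, simplified from the trace (the real nvme_get helper takes the target array name as an argument and pins /usr/local/src/nvme-cli/nvme; the parse_id_ctrl name and the bare nvme invocation here are illustrative assumptions):

  #!/usr/bin/env bash
  # Populate an associative array from `nvme id-ctrl` output, one field per line.
  declare -gA nvme2=()

  parse_id_ctrl() {
    local dev=$1 reg val
    while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue    # skip blank or malformed lines
      reg=${reg//[[:space:]]/}                # keys are padded for alignment
      val=${val# }                            # drop the single space after ':'
      eval "nvme2[$reg]=\"$val\""             # same eval-assignment as the trace
    done < <(nvme id-ctrl "$dev")
  }

  parse_id_ctrl /dev/nvme2
  echo "model: ${nvme2[mn]}, mdts: ${nvme2[mdts]}"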
00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.781 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:33.782 10:09:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
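Several of the fields captured here only become meaningful once decoded. For example, the mdts=7 recorded for nvme2 earlier in this scan is a power-of-two multiplier of the controller's minimum memory page size, so the maximum transfer size works out as below (the 4 KiB MPSMIN is an assumption, typical for QEMU's emulated controller, but it really comes from the CAP register, which this trace does not show):

  mdts=7                                   # from nvme2[mdts] in the trace
  mpsmin=4096                              # assumed, not present in the log
  max_xfer=$(( (1 << mdts) * mpsmin ))     # 128 * 4096
  echo "max transfer: $(( max_xfer / 1024 )) KiB"   # -> 512 KiB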
00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
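The queue-entry-size fields just recorded (nvme2[sqes]=0x66, nvme2[cqes]=0x44) each pack two powers of two: bits 3:0 give the required (minimum) entry size and bits 7:4 the maximum. A quick decode, using only values from the trace:

  sqes=0x66; cqes=0x44
  printf 'SQ entry: min %d B, max %d B\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
  printf 'CQ entry: min %d B, max %d B\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
  # -> SQ entries are fixed at 64 bytes, CQ entries at 16 bytes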
00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:33.782 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:33.783 
10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.783 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
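On the namespace side, the flbas=0x4 just captured for nvme2n1 selects which of the lbaf descriptors parsed a few records below is active: the low nibble of flbas is the format index, and that format's lbads gives the logical block size as a power of two. A sketch of the decode, with the lbaf4 string copied from the trace (it is the entry nvme-cli marks "(in use)"):

  flbas=0x4
  lbaf4='ms:0 lbads:12 rp:0'                  # copied from the id-ns output
  fmt=$(( flbas & 0xf ))                      # -> 4, i.e. lbaf4
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf4")
  echo "format $fmt: $(( 1 << lbads ))-byte logical blocks"   # -> 4096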
00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.784 10:09:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.784 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:33.785 10:09:36 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:33.785 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:33.785 10:09:36 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
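
The mssrl, mcl and msrc fields being captured here are the namespace's Copy command limits, which is what this nvme_scc run (SPDK's simple-copy test, going by the name) ultimately cares about: mssrl caps the length of a single source range, mcl caps the total length of one copy, and msrc is a 0's-based range count, so the 127 recorded in this trace allows 128 source ranges. Below is a hedged sketch of pre-flighting a copy request against exactly the values this log records; the request itself is made up for illustration.

  # Sketch: check a would-be simple copy against the limits traced above.
  # mssrl=128, mcl=128, msrc=127 are the logged values; msrc is 0's based,
  # so 127 permits up to 128 source ranges. The ranges array is hypothetical.
  mssrl=128 mcl=128 msrc=127
  ranges=(64 32 16)                 # source-range lengths, in logical blocks
  total=0
  for len in "${ranges[@]}"; do
      (( len <= mssrl )) || { echo "range of $len blocks exceeds MSSRL"; exit 1; }
      (( total += len ))
  done
  (( ${#ranges[@]} <= msrc + 1 ))  || { echo "too many source ranges"; exit 1; }
  (( total <= mcl ))               || { echo "total $total exceeds MCL"; exit 1; }
  echo "copy of $total blocks in ${#ranges[@]} ranges fits the limits"
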
00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
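
The repeated IFS=: / read / eval triplets above are the heart of this trace: for each namespace, nvme/functions.sh runs `/usr/local/src/nvme-cli/nvme id-ns`, splits every output line on the first colon with `IFS=: read -r reg val`, and evals the pair into a global associative array (here nvme2n2), one `nvme2n2[reg]=val` per identify field. A minimal self-contained sketch of that pattern follows; the device path, array name and whitespace trimming are illustrative, not the literal script code.

  #!/usr/bin/env bash
  # Sketch of the id-ns parsing loop traced above (assumes nvme-cli is
  # installed and /dev/nvme0n1 exists; simplified, not the exact helper).
  declare -A ns    # identify field -> value, like the nvme2n2=() array

  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue     # skip lines with no key:value
      reg=${reg//[[:space:]]/}                 # keys print right-padded; strip
      val="${val#"${val%%[![:space:]]*}"}"     # trim leading blanks
      val="${val%"${val##*[![:space:]]}"}"     # trim trailing blanks
      ns[$reg]=$val
  done < <(nvme id-ns /dev/nvme0n1)

  echo "nsze=${ns[nsze]:-unset} flbas=${ns[flbas]:-unset}"

The real helper goes through eval so the array name (nvme2n1, nvme2n2, ...) can be chosen per device, which is exactly why every assignment in this log appears twice: once as the eval'd string and once as the resulting statement.
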
00:10:33.786 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 
10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 
10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:33.787 10:09:36 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:33.787 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.788 
10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.788 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:33.789 10:09:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:33.789 10:09:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:33.789 10:09:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:33.789 10:09:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:33.789 10:09:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
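
Two of the controller words just logged decode compactly: ver=0x10400 packs the spec version as major/minor/tertiary bytes (NVMe 1.4.0), and mdts=7 caps transfers at 2^7 minimum-size pages. A one-line sketch, with values copied from the trace:

  # Decode ver and mdts as captured above (illustrative, values from the log).
  ver=0x10400 mdts=7
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
  # With a 4 KiB minimum page (typical, but really read from CAP.MPSMIN):
  echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"   # 512 KiB
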
00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
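
The namespace dumps earlier in this trace all end the same way: nlbaf=7 advertises eight LBA formats (the field is 0's based), the lbaf0..lbaf7 descriptors spell them out, and flbas=0x4 selects lbaf4, whose "ms:0 lbads:12 ... (in use)" line means 4096-byte logical blocks with no metadata. A sketch of resolving that, assuming the simple at-most-16-format case where flbas bits 3:0 index the descriptor list:

  # Resolve the in-use LBA format from the fields traced above.
  flbas=0x4
  lbads=12                       # from the 'lbaf4 : ms:0 lbads:12' descriptor
  fmt=$(( flbas & 0xf ))         # bits 3:0 index the lbafN list -> 4
  echo "lbaf$fmt in use: $(( 1 << lbads ))-byte logical blocks"   # 4096
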
00:10:33.789 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 
10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:33.790 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
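
The wctemp=343 and cctemp=373 values logged a little earlier for this controller are the warning and critical composite-temperature thresholds and, like all NVMe temperature fields, are reported in Kelvin, so they land at roughly 70 C and 100 C. Converting is one subtraction (sketch, values taken from the trace):

  # Kelvin -> Celsius for the thresholds captured above (integer -273 approx).
  wctemp=343 cctemp=373
  echo "warning at $(( wctemp - 273 )) C, critical at $(( cctemp - 273 )) C"
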
00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.791 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
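(A few of the values captured just above are worth decoding. Per the NVMe spec, the low nibble of sqes/cqes is the required queue entry size and the high nibble the maximum, both as powers of two, so sqes=0x66 means fixed 64-byte submission-queue entries and cqes=0x44 fixed 16-byte completion-queue entries; oncs=0x15d is the optional-NVM-command bitmask that the SCC selection below tests. A quick check of those encodings, with the values copied from the trace:

  sqes=0x66; cqes=0x44; oncs=0x15d
  echo $(( 1 << (sqes & 0xf) ))   # required SQ entry size: 64 bytes
  echo $(( 1 << (cqes & 0xf) ))   # required CQ entry size: 16 bytes
  echo $(( (oncs >> 8) & 1 ))     # 1 -> ONCS bit 8 (Copy command) is set
)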
00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.792 10:09:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:33.792 10:09:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:33.792 
10:09:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:33.792 10:09:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:33.793 10:09:36 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:33.793 10:09:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:33.793 10:09:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:33.793 10:09:36 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:33.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.368 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.368 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.368 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.368 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:34.368 10:09:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:34.368 10:09:37 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:34.368 10:09:37 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.368 10:09:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:34.368 ************************************ 00:10:34.368 START TEST nvme_simple_copy 00:10:34.368 ************************************ 00:10:34.368 10:09:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:34.630 Initializing NVMe Controllers 00:10:34.630 Attaching to 0000:00:10.0 00:10:34.630 Controller supports SCC. Attached to 0000:00:10.0 00:10:34.630 Namespace ID: 1 size: 6GB 00:10:34.630 Initialization complete. 00:10:34.630 00:10:34.630 Controller QEMU NVMe Ctrl (12340 ) 00:10:34.630 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:34.630 Namespace Block Size:4096 00:10:34.630 Writing LBAs 0 to 63 with Random Data 00:10:34.630 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:34.630 LBAs matching Written Data: 64 00:10:34.630 00:10:34.630 real 0m0.255s 00:10:34.630 user 0m0.084s 00:10:34.630 sys 0m0.069s 00:10:34.630 ************************************ 00:10:34.630 END TEST nvme_simple_copy 00:10:34.630 ************************************ 00:10:34.630 10:09:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.630 10:09:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:34.630 ************************************ 00:10:34.630 END TEST nvme_scc 00:10:34.630 ************************************ 00:10:34.630 00:10:34.630 real 0m7.533s 00:10:34.630 user 0m0.990s 00:10:34.630 sys 0m1.386s 00:10:34.630 10:09:37 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.630 10:09:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:34.630 10:09:37 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:34.630 10:09:37 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:34.630 10:09:37 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:34.630 10:09:37 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:34.630 10:09:37 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:34.630 10:09:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:34.630 10:09:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.630 10:09:37 -- common/autotest_common.sh@10 -- # set +x 00:10:34.630 ************************************ 00:10:34.630 START TEST nvme_fdp 00:10:34.630 ************************************ 00:10:34.630 10:09:37 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:34.889 * Looking for test storage... 
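(Before nvme_simple_copy ran, get_ctrls_with_feature walked every scanned controller and kept the ones whose ONCS advertises bit 8, the Copy command; all four report oncs=0x15d, and nvme1 at 0000:00:10.0 was chosen. The test then wrote random data to LBAs 0-63, copied them to destination LBA 256 and read back, so "LBAs matching Written Data: 64" means every copied block verified. A sketch of that feature gate, simplified to take the ONCS value directly rather than a controller name as the real ctrl_has_scc does:

  # A controller qualifies for the SCC test when ONCS bit 8
  # (Copy command support) is set, exactly as traced above.
  ctrl_has_scc() {
      local oncs=$1                 # e.g. 0x15d from id-ctrl
      (( oncs & 1 << 8 ))
  }
  ctrl_has_scc 0x15d && echo "supports copy"
)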
00:10:34.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:34.889 10:09:37 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:34.889 10:09:37 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:34.889 10:09:37 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.889 10:09:37 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.889 10:09:37 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:34.889 10:09:37 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.890 10:09:37 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.890 --rc genhtml_branch_coverage=1 00:10:34.890 --rc genhtml_function_coverage=1 00:10:34.890 --rc genhtml_legend=1 00:10:34.890 --rc geninfo_all_blocks=1 00:10:34.890 --rc geninfo_unexecuted_blocks=1 00:10:34.890 00:10:34.890 ' 00:10:34.890 10:09:37 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.890 --rc genhtml_branch_coverage=1 00:10:34.890 --rc genhtml_function_coverage=1 00:10:34.890 --rc genhtml_legend=1 00:10:34.890 --rc geninfo_all_blocks=1 00:10:34.890 --rc geninfo_unexecuted_blocks=1 00:10:34.890 00:10:34.890 ' 00:10:34.890 10:09:37 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.890 --rc genhtml_branch_coverage=1 00:10:34.890 --rc genhtml_function_coverage=1 00:10:34.890 --rc genhtml_legend=1 00:10:34.890 --rc geninfo_all_blocks=1 00:10:34.890 --rc geninfo_unexecuted_blocks=1 00:10:34.890 00:10:34.890 ' 00:10:34.890 10:09:37 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.890 --rc genhtml_branch_coverage=1 00:10:34.890 --rc genhtml_function_coverage=1 00:10:34.890 --rc genhtml_legend=1 00:10:34.890 --rc geninfo_all_blocks=1 00:10:34.890 --rc geninfo_unexecuted_blocks=1 00:10:34.890 00:10:34.890 ' 00:10:34.890 10:09:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.890 10:09:37 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.890 10:09:37 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.890 10:09:37 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.890 10:09:37 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.890 10:09:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.890 10:09:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.890 10:09:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.890 10:09:37 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:34.890 10:09:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
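(The lt/cmp_versions trace above is how autotest picks coverage flags: the installed lcov version, 1.15 here, is split on '.', '-' or ':' and compared field by field against 2, and since it sorts below 2 the 1.x-style --rc lcov_branch_coverage options are exported. A sketch of that comparison, reduced to the strictly-less-than case seen in the trace:

  # Component-wise version compare, as in scripts/common.sh:
  # succeeds when $1 sorts strictly before $2.
  lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov older than 2.x"   # 1 < 2 on the first field
)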
00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:34.890 10:09:37 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:34.890 10:09:37 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:34.890 10:09:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:35.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.148 Waiting for block devices as requested 00:10:35.405 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.405 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.405 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.665 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:41.019 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:41.019 10:09:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:41.019 10:09:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.019 10:09:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:41.019 10:09:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.019 10:09:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.019 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:41.020 10:09:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:41.020 10:09:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
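(The oacs=0x12a captured here decodes, per the spec's Optional Admin Command Support bits, to Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5) and Doorbell Buffer Config (bit 8). A one-liner to confirm which bits are set in that mask:

  oacs=0x12a
  for bit in 1 3 5 8; do
      (( oacs & 1 << bit )) && echo "OACS bit $bit set"
  done   # 1=Format NVM, 3=NS management, 5=Directives, 8=Doorbell buffer
)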
00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:41.020 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:41.021 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:41.021 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.021 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:41.022 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.022 
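
The trace above is nvme/functions.sh's nvme_get helper populating the nvme0 associative array: the output of nvme id-ctrl /dev/nvme0 (functions.sh@16) is read line by line, each line is split on the first ':' into a register name and a value (the IFS=: / read -r reg val pairs at @21), and non-empty values are eval'd into the array (@22-23). A minimal sketch of that loop, reconstructed from the trace; the real helper may differ in details:

    # Hedged reconstruction of nvme_get as seen at functions.sh@16-23.
    nvme_get() {
        local ref=$1 reg val          # e.g. ref=nvme0
        shift
        local -gA "$ref=()"           # global associative array, as at @20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}  # strip column padding from the key
            val=${val# }              # drop the single space after ':'
            [[ -n $val ]] || continue # skip headers/blank lines, as at @22
            eval "${ref}[$reg]=\"$val\""  # e.g. nvme0[vwc]="0x7", as at @23
        done < <(nvme "$@")           # the suite runs /usr/local/src/nvme-cli/nvme
    }

After a call like nvme_get nvme0 id-ctrl /dev/nvme0, fields are available as ${nvme0[vwc]} (0x7 here) or ${nvme0[subnqn]} (nqn.2019-08.org.qemu:12341, marking this as a QEMU-emulated controller).
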
10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:41.022 10:09:43 nvme_fdp -- 
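
Once the controller array is filled, functions.sh@53-57 switch to the namespaces: a nameref (_ctrl_ns) is bound to the per-controller map nvme0_ns, sysfs is globbed for child namespace nodes, and each hit gets its own nvme_get pass with id-ns, which is what starts filling nvme0n1[nsze] and friends below. A sketch of that walk, under the same caveat that the real script may differ:

    # Hedged sketch of the namespace walk at functions.sh@53-58,
    # reusing the nvme_get sketch above.
    ctrl=/sys/class/nvme/nvme0
    declare -n _ctrl_ns=nvme0_ns             # the script uses local -n inside its loop
    for ns in "$ctrl/${ctrl##*/}n"*; do      # expands to /sys/class/nvme/nvme0/nvme0n*
        [[ -e $ns ]] || continue             # guard against a non-matching glob
        ns_dev=${ns##*/}                     # -> nvme0n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev      # index by namespace number, as at @58
    done
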
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.022 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- 
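
The first id-ns fields captured above describe the namespace geometry: nlbaf=7 is zero-based, so nvme0n1 advertises eight LBA formats (lbaf0 through lbaf7, listed further down), and the low nibble of flbas selects which one is in use, which is why lbaf4 carries the "(in use)" tag below. Decoding flbas per the NVMe base specification (bit 4 would additionally indicate extended, inline metadata):

    flbas=0x4
    echo $(( flbas & 0xf ))      # -> 4, the in-use LBA format index
    echo $(( flbas >> 4 & 1 ))   # -> 0, metadata (if any) sits in a separate buffer
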
nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:41.023 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.023 10:09:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:41.024 10:09:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.024 10:09:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:41.024 10:09:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.024 10:09:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # 
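
Each lbafN descriptor above reads ms:<metadata bytes per block> lbads:<log2 of the data block size> rp:<relative performance>, so the in-use lbaf4 means 4 KiB blocks with no metadata. Combined with nsze = ncap = nuse = 0x140000 that pins down the namespace size; a quick check of the arithmetic:

    # nvme0n1 size from the captured fields:
    #   lbads:12 -> 2^12 = 4096-byte blocks; nsze = 0x140000 = 1,310,720 blocks
    echo $(( 0x140000 * (1 << 12) ))   # -> 5368709120 bytes, i.e. exactly 5 GiB

With the namespace parsed, functions.sh@58-63 register everything in the global maps (ctrls, nvmes, bdfs, ordered_ctrls), tying nvme0 to PCI address 0000:00:11.0, and the scan advances to /sys/class/nvme/nvme1. The pci_can_use check from scripts/common.sh passes here with no PCI allow/block filtering configured, which is what the empty-string [[ =~ ]] and [[ -z '' ]] tests above and the return 0 show.
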
IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- 
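
The nvme1 identify pass mirrors nvme0's. vid 0x1b36 and ssvid 0x1af4 are Red Hat/QEMU PCI IDs, the serial is "12340 ", and mdts=7 caps the per-command transfer size: MDTS is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN, assumed to be the usual 4 KiB for these QEMU controllers):

    # mdts=7 -> 2^7 pages per transfer; 4 KiB minimum page size is an assumption:
    echo $(( (1 << 7) * 4096 ))   # -> 524288 bytes, i.e. 512 KiB max per command
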
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 
10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.024 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- 
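
oacs=0x12a above is the optional-admin-command bitmask. Per the NVMe base specification, 0x12a sets bits 1, 3, 5 and 8: Format NVM, Namespace Management, Directives, and Doorbell Buffer Config (the last aimed at paravirtualized setups like this QEMU one). Individual capabilities can be tested with the same bit arithmetic the suite's helpers use:

    oacs=0x12a
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"    # bit 3, per spec
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"  # bit 8, per spec
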
# IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 
10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.025 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:41.026 10:09:43 nvme_fdp -- 
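
sqes=0x66 and cqes=0x44 pack two log2 sizes into one byte each: the low nibble is the required (minimum) queue entry size and the high nibble the maximum. Here both nibbles agree and work out to the spec's standard 64-byte submission and 16-byte completion entries; nn=256 is the controller's namespace ceiling and oncs=0x15d the optional-NVM-command bitmask:

    # sqes=0x66 -> SQ entry 2^6 = 64 B; cqes=0x44 -> CQ entry 2^4 = 16 B
    echo $(( 1 << (0x66 & 0xf) ))   # -> 64
    echo $(( 1 << (0x44 & 0xf) ))   # -> 16
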
nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:41.026 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:41.027 10:09:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.027 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:41.028 10:09:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.028 10:09:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:41.028 10:09:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.028 10:09:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:41.028 
10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:41.028 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:41.029 10:09:43 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.029 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.030 10:09:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:41.030 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:41.031 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:41.031 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: / read -r reg val [00:10:41.031 10:09:43 nvme_fdp] nvme_get nvme2n1: remaining id-ns fields eval'd into the nvme2n1 associative array:
  ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
  rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
  nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
  anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 [00:10:41.032 10:09:43 nvme_fdp]
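For readability, the mechanics traced above can be summarized in a few lines. This is a minimal standalone sketch, not the actual nvme/functions.sh source: the helper name parse_id_ns is hypothetical and the whitespace trimming around the colon is a simplifying assumption; the log itself shows the same moving parts (IFS=:, read -r reg val, eval into a local -gA array) driven by /usr/local/src/nvme-cli/nvme id-ns.

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above (hypothetical reconstruction).
    # Feeds `nvme id-ns` output through an IFS=: reader and eval's each
    # "reg : val" pair into a named global associative array.
    parse_id_ns() {                          # hypothetical helper name
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # same construct the trace shows
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "lbaf  0 " -> "lbaf0" (assumed trim)
            val=${val# }                     # drop the space after the colon (assumed)
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"\$val\""    # e.g. nvme2n1[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
    }
    parse_id_ns nvme2n1 /dev/nvme2n1
    echo "nsze=${nvme2n1[nsze]} flbas=${nvme2n1[flbas]}"

Because the array name is passed by reference and declared with -gA, each namespace (nvme2n1, nvme2n2, ...) ends up as its own global map, which is exactly what the per-field eval lines in the trace are building.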
nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* [00:10:41.032 10:09:43 nvme_fdp] loop advances to the second namespace:
nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] -> ns_dev=nvme2n2 -> nvme_get nvme2n2 id-ns /dev/nvme2n2 (shift; local -gA 'nvme2n2=()')
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2; every field parses to the same value as nvme2n1 above:
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f,
  dlfeat=1 mssrl=128 mcl=128 msrc=127, all other attributes 0, nguid/eui64 all zeroes,
  lbaf0-lbaf7 identical, with lbaf4='ms:0 lbads:12 rp:0 (in use)'
nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 [00:10:41.034 10:09:43 nvme_fdp]
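An aside on reading these arrays back: FLBAS bits 3:0 select the active LBA format, which is why flbas=0x4 pairs with the lbaf4 entry carrying the '(in use)' marker, and lbads is log2 of the LBA data size. A small self-contained demo of that arithmetic (variable names are illustrative):

    # flbas low nibble indexes the lbafN entries; lbads=12 means 2^12-byte blocks.
    declare -A ns=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
    fmt=$(( ${ns[flbas]} & 0xf ))            # 0x4 & 0xf = 4
    lbaf=${ns[lbaf$fmt]}
    lbads=${lbaf##*lbads:}                   # "12 rp:0 (in use)"
    lbads=${lbads%% *}                       # "12"
    echo "active format lbaf$fmt: $(( 1 << lbads ))-byte blocks"   # 4096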
ns in "$ctrl/${ctrl##*/}n"* 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.034 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:41.035 
10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:41.035 10:09:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.035 10:09:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:41.035 10:09:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.035 10:09:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:41.035 10:09:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:41.036 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:41.036 10:09:43 nvme_fdp -- 
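The outer loop that just moved from nvme2 to nvme3 has roughly the shape below. This is an assumed reconstruction from the trace, not the verbatim script: the readlink-based PCI lookup in particular is illustrative, and pci_can_use / nvme_get stand in for the traced helpers.

    # Assumed shape of the controller loop traced at nvme/functions.sh@47-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0 (assumed lookup)
        pci_can_use "$pci" || continue           # allow/block list check, as traced
        ctrl_dev=${ctrl##*/}                     # nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls[$ctrl_dev]=$ctrl_dev               # e.g. ctrls[nvme2]=nvme2
        nvmes[$ctrl_dev]=${ctrl_dev}_ns          # e.g. nvmes[nvme2]=nvme2_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

Worth noting for an nvme_fdp run: the ctratt=0x88010 value recorded a few entries below has bit 19 (0x80000) set, which, if I read CTRATT right, is the Flexible Data Placement attribute from TP4146, alongside bit 4 (0x10, Endurance Groups) that FDP builds on.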
nvme/functions.sh@21 -- # IFS=: / read -r reg val [00:10:41.036 10:09:43 nvme_fdp] remaining nvme3 id-ctrl fields:
  mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400
  rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
  crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0
  avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 
10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.037 10:09:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:41.037 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- 
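The wall of trace above is nvme/functions.sh caching every identify-controller field of nvme3 into a bash associative array: each "IFS=: read -r reg val" splits one line of nvme id-ctrl output into a register name and a value, and the eval stores it as nvme3[reg]=val so later helpers can test fields such as oacs or ctratt without re-querying the drive. A condensed sketch of that pattern, assuming nvme-cli is installed (the device path and array name here are illustrative, not the harness's own):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                      # register name, padding stripped
        val=$(printf '%s' "$val" | awk '{$1=$1};1')   # value, whitespace trimmed
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oacs=${ctrl[oacs]} ctratt=${ctrl[ctratt]}"  # e.g. oacs=0x12a, as in the trace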
nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:41.038 10:09:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:41.038 10:09:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:41.039 10:09:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:41.039 10:09:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:41.039 10:09:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:41.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.814 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.814 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.814 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.814 10:09:44 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:41.814 10:09:44 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:41.814 10:09:44 
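FDP capability is advertised in CTRATT bit 19 (1 << 19 = 0x80000), and that is exactly the mask ctrl_has_fdp applies above: nvme1, nvme0 and nvme2 report ctratt=0x8000 and fall through, while nvme3's 0x88010 has the bit set, so nvme3 at 0000:00:13.0 is the controller handed to the FDP test. The same check outside the harness might look like this sketch (device path illustrative; assumes nvme-cli):

    # Read CTRATT from identify-controller and test the FDP bit (bit 19).
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "FDP capable (ctratt=$ctratt)"
    else
        echo "no FDP (ctratt=$ctratt)"
    fi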
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.814 10:09:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:41.814 ************************************ 00:10:41.814 START TEST nvme_flexible_data_placement 00:10:41.814 ************************************ 00:10:41.814 10:09:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:42.072 Initializing NVMe Controllers 00:10:42.072 Attaching to 0000:00:13.0 00:10:42.072 Controller supports FDP Attached to 0000:00:13.0 00:10:42.072 Namespace ID: 1 Endurance Group ID: 1 00:10:42.072 Initialization complete. 00:10:42.072 00:10:42.072 ================================== 00:10:42.072 == FDP tests for Namespace: #01 == 00:10:42.072 ================================== 00:10:42.072 00:10:42.072 Get Feature: FDP: 00:10:42.072 ================= 00:10:42.072 Enabled: Yes 00:10:42.072 FDP configuration Index: 0 00:10:42.072 00:10:42.072 FDP configurations log page 00:10:42.072 =========================== 00:10:42.072 Number of FDP configurations: 1 00:10:42.072 Version: 0 00:10:42.072 Size: 112 00:10:42.072 FDP Configuration Descriptor: 0 00:10:42.072 Descriptor Size: 96 00:10:42.072 Reclaim Group Identifier format: 2 00:10:42.072 FDP Volatile Write Cache: Not Present 00:10:42.072 FDP Configuration: Valid 00:10:42.072 Vendor Specific Size: 0 00:10:42.072 Number of Reclaim Groups: 2 00:10:42.072 Number of Reclaim Unit Handles: 8 00:10:42.072 Max Placement Identifiers: 128 00:10:42.072 Number of Namespaces Supported: 256 00:10:42.072 Reclaim Unit Nominal Size: 6000000 bytes 00:10:42.072 Estimated Reclaim Unit Time Limit: Not Reported 00:10:42.072 RUH Desc #000: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #001: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #002: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #003: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #004: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #005: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #006: RUH Type: Initially Isolated 00:10:42.072 RUH Desc #007: RUH Type: Initially Isolated 00:10:42.072 00:10:42.072 FDP reclaim unit handle usage log page 00:10:42.072 ====================================== 00:10:42.072 Number of Reclaim Unit Handles: 8 00:10:42.072 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:42.072 RUH Usage Desc #001: RUH Attributes: Unused 00:10:42.072 RUH Usage Desc #002: RUH Attributes: Unused 00:10:42.072 RUH Usage Desc #003: RUH Attributes: Unused 00:10:42.072 RUH Usage Desc #004: RUH Attributes: Unused 00:10:42.072 RUH Usage Desc #005: RUH Attributes: Unused 00:10:42.072 RUH Usage Desc #006: RUH Attributes: Unused 00:10:42.072 RUH Usage Desc #007: RUH Attributes: Unused 00:10:42.072 00:10:42.072 FDP statistics log page 00:10:42.072 ======================= 00:10:42.072 Host bytes with metadata written: 844062720 00:10:42.072 Media bytes with metadata written: 844210176 00:10:42.072 Media bytes erased: 0 00:10:42.072 00:10:42.072 FDP Reclaim unit handle status 00:10:42.072 ============================== 00:10:42.072 Number of RUHS descriptors: 2 00:10:42.072 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003b0a 00:10:42.072 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:42.072 00:10:42.072 FDP write on placement id: 0 success 00:10:42.072 00:10:42.072 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:10:42.072 00:10:42.072 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:42.072 00:10:42.072 Get Feature: FDP Events for Placement handle: #0 00:10:42.072 ======================== 00:10:42.072 Number of FDP Events: 6 00:10:42.072 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:42.072 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:42.072 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:42.072 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:42.073 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:42.073 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:42.073 00:10:42.073 FDP events log page 00:10:42.073 =================== 00:10:42.073 Number of FDP events: 1 00:10:42.073 FDP Event #0: 00:10:42.073 Event Type: RU Not Written to Capacity 00:10:42.073 Placement Identifier: Valid 00:10:42.073 NSID: Valid 00:10:42.073 Location: Valid 00:10:42.073 Placement Identifier: 0 00:10:42.073 Event Timestamp: 6 00:10:42.073 Namespace Identifier: 1 00:10:42.073 Reclaim Group Identifier: 0 00:10:42.073 Reclaim Unit Handle Identifier: 0 00:10:42.073 00:10:42.073 FDP test passed 00:10:42.073 00:10:42.073 real 0m0.218s 00:10:42.073 user 0m0.061s 00:10:42.073 sys 0m0.057s 00:10:42.073 10:09:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.073 10:09:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:42.073 ************************************ 00:10:42.073 END TEST nvme_flexible_data_placement 00:10:42.073 ************************************ 00:10:42.073 00:10:42.073 real 0m7.320s 00:10:42.073 user 0m0.925s 00:10:42.073 sys 0m1.333s 00:10:42.073 10:09:45 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.073 10:09:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:42.073 ************************************ 00:10:42.073 END TEST nvme_fdp 00:10:42.073 ************************************ 00:10:42.073 10:09:45 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:42.073 10:09:45 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:42.073 10:09:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:42.073 10:09:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.073 10:09:45 -- common/autotest_common.sh@10 -- # set +x 00:10:42.073 ************************************ 00:10:42.073 START TEST nvme_rpc 00:10:42.073 ************************************ 00:10:42.073 10:09:45 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:42.073 * Looking for test storage... 
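The fdp binary above exercises the feature end to end: Get Feature FDP, the FDP configurations log page, reclaim unit handle usage, FDP statistics, the reclaim unit handle status command, one placement-directed write, an IO management send, and the FDP events log. Those log pages are endurance-group scoped and sit at LIDs 0x20 through 0x23, so the raw data behind this report could also be pulled with a generic get-log; a sketch only, since the exact flags depend on the nvme-cli version (the device path, endurance group 1, and 512-byte length are taken from or assumed for this run):

    for lid in 0x20 0x21 0x22 0x23; do   # configs, RUH usage, statistics, events
        echo "== FDP log page $lid =="
        nvme get-log /dev/nvme3 --log-id=$lid --log-len=512 --lsi=1 --raw-binary | xxd | head -4
    done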
00:10:42.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.073 10:09:45 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:42.073 10:09:45 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:42.073 10:09:45 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.389 10:09:45 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:42.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.389 --rc genhtml_branch_coverage=1 00:10:42.389 --rc genhtml_function_coverage=1 00:10:42.389 --rc genhtml_legend=1 00:10:42.389 --rc geninfo_all_blocks=1 00:10:42.389 --rc geninfo_unexecuted_blocks=1 00:10:42.389 00:10:42.389 ' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:42.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.389 --rc genhtml_branch_coverage=1 00:10:42.389 --rc genhtml_function_coverage=1 00:10:42.389 --rc genhtml_legend=1 00:10:42.389 --rc geninfo_all_blocks=1 00:10:42.389 --rc geninfo_unexecuted_blocks=1 00:10:42.389 00:10:42.389 ' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:42.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.389 --rc genhtml_branch_coverage=1 00:10:42.389 --rc genhtml_function_coverage=1 00:10:42.389 --rc genhtml_legend=1 00:10:42.389 --rc geninfo_all_blocks=1 00:10:42.389 --rc geninfo_unexecuted_blocks=1 00:10:42.389 00:10:42.389 ' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:42.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.389 --rc genhtml_branch_coverage=1 00:10:42.389 --rc genhtml_function_coverage=1 00:10:42.389 --rc genhtml_legend=1 00:10:42.389 --rc geninfo_all_blocks=1 00:10:42.389 --rc geninfo_unexecuted_blocks=1 00:10:42.389 00:10:42.389 ' 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65869 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:42.389 10:09:45 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65869 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 65869 ']' 00:10:42.389 10:09:45 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.390 10:09:45 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.390 10:09:45 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.390 10:09:45 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.390 10:09:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.390 [2024-10-17 10:09:45.309115] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:10:42.390 [2024-10-17 10:09:45.309242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65869 ] 00:10:42.390 [2024-10-17 10:09:45.461234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.648 [2024-10-17 10:09:45.565855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.648 [2024-10-17 10:09:45.565857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.214 10:09:46 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.214 10:09:46 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:43.214 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:43.472 Nvme0n1 00:10:43.472 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:43.472 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:43.730 request: 00:10:43.730 { 00:10:43.730 "bdev_name": "Nvme0n1", 00:10:43.730 "filename": "non_existing_file", 00:10:43.730 "method": "bdev_nvme_apply_firmware", 00:10:43.730 "req_id": 1 00:10:43.730 } 00:10:43.730 Got JSON-RPC error response 00:10:43.730 response: 00:10:43.730 { 00:10:43.730 "code": -32603, 00:10:43.730 "message": "open file failed." 00:10:43.730 } 00:10:43.730 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:43.730 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:43.730 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:43.988 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:43.988 10:09:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65869 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 65869 ']' 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 65869 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65869 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:43.988 killing process with pid 65869 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65869' 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@969 -- # kill 65869 00:10:43.988 10:09:46 nvme_rpc -- common/autotest_common.sh@974 -- # wait 65869 00:10:45.362 00:10:45.362 real 0m3.219s 00:10:45.362 user 0m6.121s 00:10:45.362 sys 0m0.472s 00:10:45.362 10:09:48 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.362 10:09:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.362 ************************************ 00:10:45.362 END TEST nvme_rpc 00:10:45.362 ************************************ 00:10:45.362 10:09:48 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:45.362 10:09:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
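Everything in the nvme_rpc transcript above goes over SPDK's JSON-RPC socket: attach the controller at 0000:00:10.0 as bdev Nvme0, deliberately call bdev_nvme_apply_firmware with a file that does not exist, require the -32603 "open file failed." response, then detach and kill the target. Replayed by hand from the repo root, against an spdk_tgt that is already listening, the sequence reduces to:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo "got the expected open-file failure"   # rpc.py exits non-zero on an RPC error
    ./scripts/rpc.py bdev_nvme_detach_controller Nvme0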
1 ']' 00:10:45.362 10:09:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.362 10:09:48 -- common/autotest_common.sh@10 -- # set +x 00:10:45.362 ************************************ 00:10:45.362 START TEST nvme_rpc_timeouts 00:10:45.362 ************************************ 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:45.362 * Looking for test storage... 00:10:45.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.362 10:09:48 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65934 00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65934 00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65966 00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
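The cmp_versions trace above (and its twin in the nvme_rpc test) is scripts/common.sh deciding whether the installed lcov predates version 2 so it can pick the right coverage options: both version strings are split on '.', '-' and ':' and compared numerically component by component, with missing components treated as 0. That comparison, lifted out as a standalone sketch:

    # Succeed when $1 is an older version than $2 (numeric, component-wise).
    lt() {
        local -a ver1 ver2
        local i n
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not older
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: keep the old --rc option spellings"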
00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65966 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 65966 ']' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.362 10:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:45.362 10:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:45.622 [2024-10-17 10:09:48.511547] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:10:45.622 [2024-10-17 10:09:48.511670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65966 ] 00:10:45.622 [2024-10-17 10:09:48.659674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.882 [2024-10-17 10:09:48.744678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.882 [2024-10-17 10:09:48.744850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.456 Checking default timeout settings: 00:10:46.456 10:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.456 10:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:46.456 10:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:46.456 10:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:46.716 Making settings changes with rpc: 00:10:46.716 10:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:46.716 10:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:46.976 Check default vs. modified settings: 00:10:46.976 10:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:46.976 10:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.237 Setting action_on_timeout is changed as expected. 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.237 Setting timeout_us is changed as expected. 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
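The check above is mechanical: save_config is dumped once before and once after bdev_nvme_set_options, each setting is grepped out of both dumps with awk taking the value column, and sed strips everything but alphanumerics so "none"/"abort" and the microsecond values compare cleanly. action_on_timeout going none to abort and timeout_us going 0 to 12000000 both pass here, with timeout_admin_us checked next. The substance of the test, redone with a plain diff (temp-file names here are illustrative, unlike the pid-suffixed ones the script uses):

    ./scripts/rpc.py save_config > /tmp/settings_before.json
    ./scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    ./scripts/rpc.py save_config > /tmp/settings_after.json
    diff <(grep -E 'action_on_timeout|timeout_(admin_)?us' /tmp/settings_before.json) \
         <(grep -E 'action_on_timeout|timeout_(admin_)?us' /tmp/settings_after.json)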
00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.237 Setting timeout_admin_us is changed as expected. 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65934 /tmp/settings_modified_65934 00:10:47.237 10:09:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65966 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 65966 ']' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 65966 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65966 00:10:47.237 killing process with pid 65966 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65966' 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 65966 00:10:47.237 10:09:50 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 65966 00:10:48.631 RPC TIMEOUT SETTING TEST PASSED. 00:10:48.631 10:09:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:48.631 00:10:48.631 real 0m3.169s 00:10:48.631 user 0m6.228s 00:10:48.631 sys 0m0.465s 00:10:48.631 10:09:51 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.631 10:09:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:48.631 ************************************ 00:10:48.631 END TEST nvme_rpc_timeouts 00:10:48.631 ************************************ 00:10:48.631 10:09:51 -- spdk/autotest.sh@239 -- # uname -s 00:10:48.631 10:09:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:48.631 10:09:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:48.631 10:09:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.631 10:09:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.631 10:09:51 -- common/autotest_common.sh@10 -- # set +x 00:10:48.631 ************************************ 00:10:48.631 START TEST sw_hotplug 00:10:48.631 ************************************ 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:48.631 * Looking for test storage... 00:10:48.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.631 10:09:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:48.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.631 --rc genhtml_branch_coverage=1 00:10:48.631 --rc genhtml_function_coverage=1 00:10:48.631 --rc genhtml_legend=1 00:10:48.631 --rc geninfo_all_blocks=1 00:10:48.631 --rc geninfo_unexecuted_blocks=1 00:10:48.631 00:10:48.631 ' 00:10:48.631 10:09:51 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:48.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.631 --rc genhtml_branch_coverage=1 00:10:48.631 --rc genhtml_function_coverage=1 00:10:48.631 --rc genhtml_legend=1 00:10:48.632 --rc geninfo_all_blocks=1 00:10:48.632 --rc geninfo_unexecuted_blocks=1 00:10:48.632 00:10:48.632 ' 00:10:48.632 10:09:51 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.632 --rc genhtml_branch_coverage=1 00:10:48.632 --rc genhtml_function_coverage=1 00:10:48.632 --rc genhtml_legend=1 00:10:48.632 --rc geninfo_all_blocks=1 00:10:48.632 --rc geninfo_unexecuted_blocks=1 00:10:48.632 00:10:48.632 ' 00:10:48.632 10:09:51 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.632 --rc genhtml_branch_coverage=1 00:10:48.632 --rc genhtml_function_coverage=1 00:10:48.632 --rc genhtml_legend=1 00:10:48.632 --rc geninfo_all_blocks=1 00:10:48.632 --rc geninfo_unexecuted_blocks=1 00:10:48.632 00:10:48.632 ' 00:10:48.632 10:09:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:48.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.153 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.153 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.153 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.153 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.153 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:49.153 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:49.153 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
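The lt/cmp_versions trace from scripts/common.sh above is a plain field-wise version comparison. A simplified sketch, assuming bash; the real helper additionally validates every field through its decimal() function, which is elided here:

# Split both version strings on '.', '-' or ':' and compare field by field.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]   # equal versions satisfy only '==', '<=', '>='
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo 'lcov 1.15 predates 2'   # the case traced above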
00:10:49.153 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:49.153 10:09:52 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:49.154 10:09:52 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:49.154 10:09:52 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:49.154 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:49.154 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:49.154 10:09:52 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:49.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.676 Waiting for block devices as requested 00:10:49.676 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.677 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.677 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.938 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.209 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:55.209 10:09:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:55.209 10:09:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:55.209 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:55.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:55.209 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:55.492 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:55.749 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.749 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:55.749 10:09:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66819 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:55.749 10:09:58 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:55.749 10:09:58 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:55.749 10:09:58 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:55.749 10:09:58 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:55.749 10:09:58 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:55.749 10:09:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:56.007 Initializing NVMe Controllers 00:10:56.007 Attaching to 0000:00:10.0 00:10:56.007 Attaching to 0000:00:11.0 00:10:56.007 Attached to 0000:00:10.0 00:10:56.007 Attached to 0000:00:11.0 00:10:56.007 Initialization complete. Starting I/O... 
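Two pieces of the setup traced above are worth unpacking. First, nvme_in_userspace (scripts/common.sh@298-329) selects controllers by PCI class code: class 01 / subclass 08 / prog-if 02 is NVMe. A condensed bash sketch, with the pci_can_use allow/deny filtering omitted:

nvme_in_userspace() {
    local bdf bdfs=()
    for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
                   | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
                   | tr -d '"'); do
        # Keep only functions currently bound to the kernel nvme driver.
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"
}

Second, the echo 1 / echo uio_pci_generic lines in the remove/attach cycles that follow are consistent with the standard Linux PCI sysfs hotplug flow. The xtrace shows only the values being written, not the redirection targets, so the paths below are inferred rather than taken from the script:

bdf=0000:00:10.0                                   # illustrative BDF
echo 1 > "/sys/bus/pci/devices/$bdf/remove"        # hot-remove the controller
echo 1 > /sys/bus/pci/rescan                       # bring removed devices back
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the userspace driver
echo "$bdf" > /sys/bus/pci/drivers_probe           # rebind using the override
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # clear the override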
00:10:56.007 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:56.007 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:56.007 00:10:56.947 QEMU NVMe Ctrl (12340 ): 2582 I/Os completed (+2582) 00:10:56.947 QEMU NVMe Ctrl (12341 ): 2559 I/Os completed (+2559) 00:10:56.947 00:10:58.328 QEMU NVMe Ctrl (12340 ): 5630 I/Os completed (+3048) 00:10:58.328 QEMU NVMe Ctrl (12341 ): 5608 I/Os completed (+3049) 00:10:58.328 00:10:59.261 QEMU NVMe Ctrl (12340 ): 8776 I/Os completed (+3146) 00:10:59.261 QEMU NVMe Ctrl (12341 ): 8709 I/Os completed (+3101) 00:10:59.261 00:11:00.194 QEMU NVMe Ctrl (12340 ): 11835 I/Os completed (+3059) 00:11:00.194 QEMU NVMe Ctrl (12341 ): 11749 I/Os completed (+3040) 00:11:00.194 00:11:01.127 QEMU NVMe Ctrl (12340 ): 15091 I/Os completed (+3256) 00:11:01.127 QEMU NVMe Ctrl (12341 ): 15011 I/Os completed (+3262) 00:11:01.127 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.063 [2024-10-17 10:10:04.838096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:02.063 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:02.063 [2024-10-17 10:10:04.839094] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.839133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.839149] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.839164] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:02.063 [2024-10-17 10:10:04.842201] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.842240] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.842252] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.842264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.063 [2024-10-17 10:10:04.862627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:02.063 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:02.063 [2024-10-17 10:10:04.863612] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.863716] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.863750] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.863787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:02.063 [2024-10-17 10:10:04.865339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.865464] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.865486] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 [2024-10-17 10:10:04.865498] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.063 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:02.063 EAL: Scan for (pci) bus failed. 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.063 10:10:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:02.063 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:02.063 Attaching to 0000:00:10.0 00:11:02.063 Attached to 0000:00:10.0 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.063 10:10:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.063 Attaching to 0000:00:11.0 00:11:02.063 Attached to 0000:00:11.0 00:11:02.998 QEMU NVMe Ctrl (12340 ): 3487 I/Os completed (+3487) 00:11:02.998 QEMU NVMe Ctrl (12341 ): 3260 I/Os completed (+3260) 00:11:02.998 00:11:04.378 QEMU NVMe Ctrl (12340 ): 6606 I/Os completed (+3119) 00:11:04.378 QEMU NVMe Ctrl (12341 ): 6387 I/Os completed (+3127) 00:11:04.378 00:11:04.944 QEMU NVMe Ctrl (12340 ): 9826 I/Os completed (+3220) 00:11:04.944 QEMU NVMe Ctrl (12341 ): 9562 I/Os completed (+3175) 00:11:04.944 00:11:06.341 QEMU NVMe Ctrl (12340 ): 13045 I/Os completed (+3219) 00:11:06.341 QEMU NVMe Ctrl (12341 ): 12678 I/Os completed (+3116) 00:11:06.341 00:11:07.275 QEMU NVMe Ctrl (12340 ): 16144 I/Os completed (+3099) 00:11:07.275 QEMU NVMe Ctrl (12341 ): 15765 I/Os completed (+3087) 00:11:07.275 00:11:08.208 QEMU NVMe Ctrl (12340 ): 19257 I/Os completed (+3113) 00:11:08.208 QEMU NVMe Ctrl (12341 ): 18889 I/Os completed (+3124) 00:11:08.208 00:11:09.141 QEMU NVMe Ctrl (12340 ): 22882 I/Os completed (+3625) 00:11:09.141 QEMU NVMe Ctrl (12341 ): 22528 I/Os completed (+3639) 
00:11:09.141 00:11:10.113 QEMU NVMe Ctrl (12340 ): 26526 I/Os completed (+3644) 00:11:10.113 QEMU NVMe Ctrl (12341 ): 26193 I/Os completed (+3665) 00:11:10.113 00:11:11.081 QEMU NVMe Ctrl (12340 ): 29774 I/Os completed (+3248) 00:11:11.081 QEMU NVMe Ctrl (12341 ): 29524 I/Os completed (+3331) 00:11:11.081 00:11:12.015 QEMU NVMe Ctrl (12340 ): 32800 I/Os completed (+3026) 00:11:12.015 QEMU NVMe Ctrl (12341 ): 32572 I/Os completed (+3048) 00:11:12.015 00:11:12.950 QEMU NVMe Ctrl (12340 ): 36302 I/Os completed (+3502) 00:11:12.950 QEMU NVMe Ctrl (12341 ): 36037 I/Os completed (+3465) 00:11:12.950 00:11:14.323 QEMU NVMe Ctrl (12340 ): 39997 I/Os completed (+3695) 00:11:14.323 QEMU NVMe Ctrl (12341 ): 39728 I/Os completed (+3691) 00:11:14.323 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.323 [2024-10-17 10:10:17.122033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:14.323 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:14.323 [2024-10-17 10:10:17.122990] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.123034] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.123058] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.123073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:14.323 [2024-10-17 10:10:17.124618] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.124655] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.124667] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.124679] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.323 [2024-10-17 10:10:17.141958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:14.323 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:14.323 [2024-10-17 10:10:17.143022] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.143131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.143196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.143212] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:14.323 [2024-10-17 10:10:17.144649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.144736] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.144796] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 [2024-10-17 10:10:17.144822] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.323 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:14.323 EAL: Scan for (pci) bus failed. 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.323 Attaching to 0000:00:10.0 00:11:14.323 Attached to 0000:00:10.0 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.323 10:10:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:14.323 Attaching to 0000:00:11.0 00:11:14.323 Attached to 0000:00:11.0 00:11:15.286 QEMU NVMe Ctrl (12340 ): 2562 I/Os completed (+2562) 00:11:15.286 QEMU NVMe Ctrl (12341 ): 2162 I/Os completed (+2162) 00:11:15.286 00:11:16.220 QEMU NVMe Ctrl (12340 ): 5807 I/Os completed (+3245) 00:11:16.220 QEMU NVMe Ctrl (12341 ): 5442 I/Os completed (+3280) 00:11:16.220 00:11:17.154 QEMU NVMe Ctrl (12340 ): 8889 I/Os completed (+3082) 00:11:17.154 QEMU NVMe Ctrl (12341 ): 8569 I/Os completed (+3127) 00:11:17.154 00:11:18.086 QEMU NVMe Ctrl (12340 ): 12013 I/Os completed (+3124) 00:11:18.086 QEMU NVMe Ctrl (12341 ): 11734 I/Os completed (+3165) 00:11:18.086 00:11:19.022 QEMU NVMe Ctrl (12340 ): 15024 I/Os completed (+3011) 00:11:19.022 QEMU NVMe Ctrl (12341 ): 14753 I/Os completed (+3019) 00:11:19.022 00:11:19.956 QEMU NVMe Ctrl (12340 ): 18066 I/Os completed (+3042) 00:11:19.956 QEMU NVMe Ctrl (12341 ): 17800 I/Os completed (+3047) 00:11:19.956 00:11:21.329 QEMU NVMe Ctrl (12340 ): 21494 I/Os completed (+3428) 00:11:21.329 QEMU NVMe Ctrl (12341 ): 21252 I/Os completed (+3452) 00:11:21.329 
00:11:22.300 QEMU NVMe Ctrl (12340 ): 25134 I/Os completed (+3640) 00:11:22.300 QEMU NVMe Ctrl (12341 ): 24871 I/Os completed (+3619) 00:11:22.300 00:11:23.233 QEMU NVMe Ctrl (12340 ): 28228 I/Os completed (+3094) 00:11:23.233 QEMU NVMe Ctrl (12341 ): 28040 I/Os completed (+3169) 00:11:23.233 00:11:24.165 QEMU NVMe Ctrl (12340 ): 31276 I/Os completed (+3048) 00:11:24.165 QEMU NVMe Ctrl (12341 ): 31136 I/Os completed (+3096) 00:11:24.165 00:11:25.132 QEMU NVMe Ctrl (12340 ): 34343 I/Os completed (+3067) 00:11:25.132 QEMU NVMe Ctrl (12341 ): 34185 I/Os completed (+3049) 00:11:25.132 00:11:26.067 QEMU NVMe Ctrl (12340 ): 37955 I/Os completed (+3612) 00:11:26.067 QEMU NVMe Ctrl (12341 ): 37785 I/Os completed (+3600) 00:11:26.067 00:11:26.325 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:26.325 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.325 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.325 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.325 [2024-10-17 10:10:29.395607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:26.325 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:26.325 [2024-10-17 10:10:29.396607] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 [2024-10-17 10:10:29.396674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 [2024-10-17 10:10:29.396700] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 [2024-10-17 10:10:29.396726] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:26.325 [2024-10-17 10:10:29.398393] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 [2024-10-17 10:10:29.398435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 [2024-10-17 10:10:29.398447] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 [2024-10-17 10:10:29.398458] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.325 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.325 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.582 [2024-10-17 10:10:29.417319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:26.582 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:26.582 [2024-10-17 10:10:29.418211] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 [2024-10-17 10:10:29.418267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 [2024-10-17 10:10:29.418294] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 [2024-10-17 10:10:29.418319] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:26.582 [2024-10-17 10:10:29.419675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 [2024-10-17 10:10:29.419708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 [2024-10-17 10:10:29.419722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 [2024-10-17 10:10:29.419732] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.582 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:26.582 EAL: Scan for (pci) bus failed. 00:11:26.582 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:26.582 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.582 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.582 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.582 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.582 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.583 Attaching to 0000:00:10.0 00:11:26.583 Attached to 0000:00:10.0 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.583 10:10:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.583 Attaching to 0000:00:11.0 00:11:26.583 Attached to 0000:00:11.0 00:11:26.583 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:26.583 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:26.583 [2024-10-17 10:10:29.652443] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:38.779 10:10:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:38.779 10:10:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.779 10:10:41 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.81 00:11:38.779 10:10:41 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.81 00:11:38.779 10:10:41 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:38.779 10:10:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:11:38.779 10:10:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:11:38.779 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 10:10:41 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66819 00:11:45.338 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66819) - No such process 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66819 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67369 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:45.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.338 10:10:47 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67369 00:11:45.338 10:10:47 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67369 ']' 00:11:45.338 10:10:47 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.338 10:10:47 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.338 10:10:47 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.338 10:10:47 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.338 10:10:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.338 [2024-10-17 10:10:47.729385] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:11:45.338 [2024-10-17 10:10:47.729509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67369 ] 00:11:45.338 [2024-10-17 10:10:47.878380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.338 [2024-10-17 10:10:47.974235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:45.596 10:10:48 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:45.596 10:10:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.170 10:10:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.170 10:10:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 10:10:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:52.170 10:10:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.170 [2024-10-17 10:10:54.667081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:52.170 [2024-10-17 10:10:54.668400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.170 [2024-10-17 10:10:54.668436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.170 [2024-10-17 10:10:54.668447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.170 [2024-10-17 10:10:54.668467] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.170 [2024-10-17 10:10:54.668475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.170 [2024-10-17 10:10:54.668484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.170 [2024-10-17 10:10:54.668491] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.170 [2024-10-17 10:10:54.668499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.170 [2024-10-17 10:10:54.668505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.170 [2024-10-17 10:10:54.668515] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.170 [2024-10-17 10:10:54.668522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.170 [2024-10-17 10:10:54.668530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.170 10:10:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.170 10:10:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 10:10:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:52.170 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.428 [2024-10-17 10:10:55.367097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:52.428 [2024-10-17 10:10:55.368397] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.428 [2024-10-17 10:10:55.368429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.428 [2024-10-17 10:10:55.368441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.428 [2024-10-17 10:10:55.368457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.428 [2024-10-17 10:10:55.368467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.428 [2024-10-17 10:10:55.368474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.428 [2024-10-17 10:10:55.368483] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.428 [2024-10-17 10:10:55.368490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.428 [2024-10-17 10:10:55.368498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.428 [2024-10-17 10:10:55.368505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.428 [2024-10-17 10:10:55.368513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.428 [2024-10-17 10:10:55.368519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.686 10:10:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.686 10:10:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.686 10:10:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.686 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.944 10:10:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.208 10:11:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.208 10:11:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.208 10:11:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.208 10:11:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.208 10:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.208 10:11:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.208 10:11:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.208 10:11:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.208 10:11:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.208 [2024-10-17 10:11:08.067394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.208 [2024-10-17 10:11:08.068662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.208 [2024-10-17 10:11:08.068696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.208 [2024-10-17 10:11:08.068707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.208 [2024-10-17 10:11:08.068723] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.208 [2024-10-17 10:11:08.068731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.208 [2024-10-17 10:11:08.068739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.208 [2024-10-17 10:11:08.068746] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.208 [2024-10-17 10:11:08.068754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.208 [2024-10-17 10:11:08.068761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.208 [2024-10-17 10:11:08.068769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.208 [2024-10-17 10:11:08.068776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.208 [2024-10-17 10:11:08.068784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.208 10:11:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.208 10:11:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.208 10:11:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:05.208 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.470 [2024-10-17 10:11:08.467404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:05.470 [2024-10-17 10:11:08.468722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.470 [2024-10-17 10:11:08.468755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.470 [2024-10-17 10:11:08.468769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.470 [2024-10-17 10:11:08.468784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.470 [2024-10-17 10:11:08.468793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.470 [2024-10-17 10:11:08.468800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.470 [2024-10-17 10:11:08.468814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.470 [2024-10-17 10:11:08.468821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.470 [2024-10-17 10:11:08.468829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.470 [2024-10-17 10:11:08.468836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.470 [2024-10-17 10:11:08.468844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.470 [2024-10-17 10:11:08.468850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.746 10:11:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.746 10:11:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.746 10:11:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.746 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:06.005 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:06.005 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.005 10:11:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.203 10:11:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.203 10:11:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.203 10:11:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.203 10:11:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.203 10:11:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.203 [2024-10-17 10:11:20.967712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:18.203 [2024-10-17 10:11:20.969088] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.203 [2024-10-17 10:11:20.969123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.203 [2024-10-17 10:11:20.969134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.203 [2024-10-17 10:11:20.969151] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.203 [2024-10-17 10:11:20.969158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.203 [2024-10-17 10:11:20.969168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.203 [2024-10-17 10:11:20.969176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.203 [2024-10-17 10:11:20.969184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.203 [2024-10-17 10:11:20.969190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.203 [2024-10-17 10:11:20.969198] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.203 [2024-10-17 10:11:20.969205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.203 [2024-10-17 10:11:20.969213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.203 10:11:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:18.203 10:11:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:18.463 [2024-10-17 10:11:21.367723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:18.463 [2024-10-17 10:11:21.369137] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.463 [2024-10-17 10:11:21.369171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.463 [2024-10-17 10:11:21.369184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.463 [2024-10-17 10:11:21.369199] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.463 [2024-10-17 10:11:21.369208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.463 [2024-10-17 10:11:21.369215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.463 [2024-10-17 10:11:21.369224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.463 [2024-10-17 10:11:21.369231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.463 [2024-10-17 10:11:21.369241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.463 [2024-10-17 10:11:21.369248] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.463 [2024-10-17 10:11:21.369256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.463 [2024-10-17 10:11:21.369263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.463 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:18.463 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.463 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.463 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.463 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.463 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.463 10:11:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.463 10:11:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 10:11:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.764 10:11:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.25 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.25 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.25 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.25 2 00:12:30.985 remove_attach_helper took 45.25s to complete (handling 2 nvme drive(s)) 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:30.985 10:11:33 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:30.985 10:11:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:30.985 10:11:33 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.559 10:11:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.559 10:11:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.559 10:11:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:37.559 10:11:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:37.559 [2024-10-17 10:11:39.950323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:37.559 [2024-10-17 10:11:39.951402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:39.951435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:39.951445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:39.951466] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:39.951474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:39.951482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:39.951490] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:39.951498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:39.951505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:39.951514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:39.951520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:39.951530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:40.350335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:37.559 [2024-10-17 10:11:40.351458] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:40.351495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:40.351507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:40.351523] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:40.351532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:40.351539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:40.351549] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:40.351556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:40.351564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 [2024-10-17 10:11:40.351571] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.559 [2024-10-17 10:11:40.351581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.559 [2024-10-17 10:11:40.351588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.559 10:11:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.559 10:11:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.559 10:11:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.559 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.560 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:37.822 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:37.822 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.822 10:11:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.116 10:11:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.116 10:11:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.116 10:11:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.116 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.117 10:11:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.117 10:11:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.117 10:11:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:50.117 10:11:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.117 [2024-10-17 10:11:52.850745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:50.117 [2024-10-17 10:11:52.853592] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.117 [2024-10-17 10:11:52.853633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.117 [2024-10-17 10:11:52.853644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.117 [2024-10-17 10:11:52.853662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.117 [2024-10-17 10:11:52.853669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.117 [2024-10-17 10:11:52.853677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.117 [2024-10-17 10:11:52.853685] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.117 [2024-10-17 10:11:52.853693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.117 [2024-10-17 10:11:52.853699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.117 [2024-10-17 10:11:52.853708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.117 [2024-10-17 10:11:52.853714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.117 [2024-10-17 10:11:52.853722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.379 [2024-10-17 10:11:53.250767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:50.379 [2024-10-17 10:11:53.251802] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.379 [2024-10-17 10:11:53.251833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.379 [2024-10-17 10:11:53.251846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.379 [2024-10-17 10:11:53.251861] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.379 [2024-10-17 10:11:53.251871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.379 [2024-10-17 10:11:53.251879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.379 [2024-10-17 10:11:53.251888] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.379 [2024-10-17 10:11:53.251895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.379 [2024-10-17 10:11:53.251903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.379 [2024-10-17 10:11:53.251911] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.379 [2024-10-17 10:11:53.251922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.379 [2024-10-17 10:11:53.251929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.379 10:11:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.379 10:11:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.379 10:11:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.379 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:50.640 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:50.640 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.641 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.641 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.641 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:50.641 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:50.641 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.641 10:11:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.866 10:12:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.866 10:12:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.866 10:12:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:02.866 10:12:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.866 10:12:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.866 10:12:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:02.866 10:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:02.866 [2024-10-17 10:12:05.751182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:02.866 [2024-10-17 10:12:05.752198] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.866 [2024-10-17 10:12:05.752236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.866 [2024-10-17 10:12:05.752246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.866 [2024-10-17 10:12:05.752262] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.866 [2024-10-17 10:12:05.752269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.866 [2024-10-17 10:12:05.752277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.866 [2024-10-17 10:12:05.752285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.866 [2024-10-17 10:12:05.752296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.866 [2024-10-17 10:12:05.752303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.866 [2024-10-17 10:12:05.752311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.866 [2024-10-17 10:12:05.752317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.866 [2024-10-17 10:12:05.752324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.126 [2024-10-17 10:12:06.151199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:03.126 [2024-10-17 10:12:06.152479] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.126 [2024-10-17 10:12:06.152512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.126 [2024-10-17 10:12:06.152524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.126 [2024-10-17 10:12:06.152538] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.126 [2024-10-17 10:12:06.152547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.126 [2024-10-17 10:12:06.152554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.126 [2024-10-17 10:12:06.152564] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.126 [2024-10-17 10:12:06.152570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.126 [2024-10-17 10:12:06.152579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.126 [2024-10-17 10:12:06.152586] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.126 [2024-10-17 10:12:06.152596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.127 [2024-10-17 10:12:06.152603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.386 10:12:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.386 10:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.386 10:12:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.386 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:03.647 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:03.647 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.647 10:12:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.66 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.66 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:13:15.851 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:15.851 10:12:18 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67369 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67369 ']' 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67369 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67369 00:13:15.851 10:12:18 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.851 killing process with pid 67369 00:13:15.852 10:12:18 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.852 10:12:18 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67369' 00:13:15.852 10:12:18 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67369 00:13:15.852 10:12:18 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67369 00:13:16.809 10:12:19 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:17.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:17.325 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:17.325 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:17.584 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:17.584 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:17.584 00:13:17.584 real 2m29.086s 00:13:17.584 user 1m51.313s 00:13:17.584 sys 0m16.497s 00:13:17.584 10:12:20 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.584 10:12:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.584 ************************************ 00:13:17.584 END TEST sw_hotplug 00:13:17.584 ************************************ 00:13:17.584 10:12:20 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:17.584 10:12:20 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:17.584 10:12:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:17.584 10:12:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.584 10:12:20 -- common/autotest_common.sh@10 -- # set +x 00:13:17.584 ************************************ 00:13:17.584 START TEST nvme_xnvme 00:13:17.584 ************************************ 00:13:17.584 10:12:20 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:17.846 * Looking for test storage... 00:13:17.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.846 --rc genhtml_branch_coverage=1 00:13:17.846 --rc genhtml_function_coverage=1 00:13:17.846 --rc genhtml_legend=1 00:13:17.846 --rc geninfo_all_blocks=1 00:13:17.846 --rc geninfo_unexecuted_blocks=1 00:13:17.846 00:13:17.846 ' 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.846 --rc genhtml_branch_coverage=1 00:13:17.846 --rc genhtml_function_coverage=1 00:13:17.846 --rc genhtml_legend=1 00:13:17.846 --rc geninfo_all_blocks=1 00:13:17.846 --rc geninfo_unexecuted_blocks=1 00:13:17.846 00:13:17.846 ' 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.846 --rc genhtml_branch_coverage=1 00:13:17.846 --rc genhtml_function_coverage=1 00:13:17.846 --rc genhtml_legend=1 00:13:17.846 --rc geninfo_all_blocks=1 00:13:17.846 --rc geninfo_unexecuted_blocks=1 00:13:17.846 00:13:17.846 ' 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.846 --rc genhtml_branch_coverage=1 00:13:17.846 --rc genhtml_function_coverage=1 00:13:17.846 --rc genhtml_legend=1 00:13:17.846 --rc geninfo_all_blocks=1 00:13:17.846 --rc geninfo_unexecuted_blocks=1 00:13:17.846 00:13:17.846 ' 00:13:17.846 10:12:20 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.846 10:12:20 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.846 10:12:20 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.846 10:12:20 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.846 10:12:20 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.846 10:12:20 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:17.846 10:12:20 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.846 10:12:20 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.846 10:12:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.846 ************************************ 00:13:17.846 START TEST xnvme_to_malloc_dd_copy 00:13:17.846 ************************************ 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:17.846 10:12:20 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:17.846 10:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:17.846 { 00:13:17.846 "subsystems": [ 00:13:17.846 { 00:13:17.846 "subsystem": "bdev", 00:13:17.846 "config": [ 00:13:17.846 { 00:13:17.846 "params": { 00:13:17.846 "block_size": 512, 00:13:17.846 "num_blocks": 2097152, 00:13:17.846 "name": "malloc0" 00:13:17.846 }, 00:13:17.846 "method": "bdev_malloc_create" 00:13:17.846 }, 00:13:17.846 { 00:13:17.846 "params": { 00:13:17.846 "io_mechanism": "libaio", 00:13:17.846 "filename": "/dev/nullb0", 00:13:17.846 "name": "null0" 00:13:17.846 }, 00:13:17.846 "method": "bdev_xnvme_create" 00:13:17.846 }, 00:13:17.846 { 00:13:17.846 "method": "bdev_wait_for_examine" 00:13:17.846 } 00:13:17.846 ] 00:13:17.847 } 00:13:17.847 ] 00:13:17.847 } 00:13:17.847 [2024-10-17 10:12:20.842113] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:13:17.847 [2024-10-17 10:12:20.842208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68739 ] 00:13:18.105 [2024-10-17 10:12:20.986580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.105 [2024-10-17 10:12:21.087891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.015  [2024-10-17T10:12:24.046Z] Copying: 229/1024 [MB] (229 MBps) [2024-10-17T10:12:25.430Z] Copying: 458/1024 [MB] (229 MBps) [2024-10-17T10:12:26.000Z] Copying: 742/1024 [MB] (283 MBps) [2024-10-17T10:12:28.568Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:13:25.477 00:13:25.477 10:12:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:25.477 10:12:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:25.477 10:12:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:25.477 10:12:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.477 { 00:13:25.477 "subsystems": [ 00:13:25.477 { 00:13:25.477 "subsystem": "bdev", 00:13:25.477 "config": [ 00:13:25.477 { 00:13:25.477 "params": { 00:13:25.477 "block_size": 512, 00:13:25.477 "num_blocks": 2097152, 00:13:25.477 "name": "malloc0" 00:13:25.477 }, 00:13:25.477 "method": "bdev_malloc_create" 00:13:25.477 }, 00:13:25.477 { 00:13:25.477 "params": { 00:13:25.477 "io_mechanism": "libaio", 00:13:25.477 "filename": "/dev/nullb0", 00:13:25.477 "name": "null0" 00:13:25.477 }, 00:13:25.477 "method": "bdev_xnvme_create" 00:13:25.477 }, 00:13:25.477 { 00:13:25.477 "method": "bdev_wait_for_examine" 00:13:25.477 } 00:13:25.477 ] 00:13:25.477 } 00:13:25.477 ] 00:13:25.477 } 00:13:25.477 [2024-10-17 10:12:28.024900] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:13:25.477 [2024-10-17 10:12:28.024993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68822 ] 00:13:25.477 [2024-10-17 10:12:28.165192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.477 [2024-10-17 10:12:28.247535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.390  [2024-10-17T10:12:31.053Z] Copying: 299/1024 [MB] (299 MBps) [2024-10-17T10:12:32.432Z] Copying: 598/1024 [MB] (298 MBps) [2024-10-17T10:12:32.432Z] Copying: 898/1024 [MB] (299 MBps) [2024-10-17T10:12:34.337Z] Copying: 1024/1024 [MB] (average 299 MBps) 00:13:31.246 00:13:31.507 10:12:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:31.507 10:12:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:31.508 10:12:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:31.508 10:12:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:31.508 10:12:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:31.508 10:12:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:31.508 { 00:13:31.508 "subsystems": [ 00:13:31.508 { 00:13:31.508 "subsystem": "bdev", 00:13:31.508 "config": [ 00:13:31.508 { 00:13:31.508 "params": { 00:13:31.508 "block_size": 512, 00:13:31.508 "num_blocks": 2097152, 00:13:31.508 "name": "malloc0" 00:13:31.508 }, 00:13:31.508 "method": "bdev_malloc_create" 00:13:31.508 }, 00:13:31.508 { 00:13:31.508 "params": { 00:13:31.508 "io_mechanism": "io_uring", 00:13:31.508 "filename": "/dev/nullb0", 00:13:31.508 "name": "null0" 00:13:31.508 }, 00:13:31.508 "method": "bdev_xnvme_create" 00:13:31.508 }, 00:13:31.508 { 00:13:31.508 "method": "bdev_wait_for_examine" 00:13:31.508 } 00:13:31.508 ] 00:13:31.508 } 00:13:31.508 ] 00:13:31.508 } 00:13:31.508 [2024-10-17 10:12:34.427139] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:13:31.508 [2024-10-17 10:12:34.427247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:13:31.508 [2024-10-17 10:12:34.575660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.768 [2024-10-17 10:12:34.657092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.677  [2024-10-17T10:12:37.701Z] Copying: 247/1024 [MB] (247 MBps) [2024-10-17T10:12:38.633Z] Copying: 482/1024 [MB] (234 MBps) [2024-10-17T10:12:39.573Z] Copying: 718/1024 [MB] (235 MBps) [2024-10-17T10:12:39.573Z] Copying: 986/1024 [MB] (268 MBps) [2024-10-17T10:12:41.487Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:13:38.396 00:13:38.396 10:12:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:38.396 10:12:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:38.396 10:12:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:38.396 10:12:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:38.396 { 00:13:38.396 "subsystems": [ 00:13:38.396 { 00:13:38.396 "subsystem": "bdev", 00:13:38.396 "config": [ 00:13:38.396 { 00:13:38.396 "params": { 00:13:38.396 "block_size": 512, 00:13:38.396 "num_blocks": 2097152, 00:13:38.396 "name": "malloc0" 00:13:38.396 }, 00:13:38.396 "method": "bdev_malloc_create" 00:13:38.396 }, 00:13:38.396 { 00:13:38.396 "params": { 00:13:38.396 "io_mechanism": "io_uring", 00:13:38.396 "filename": "/dev/nullb0", 00:13:38.396 "name": "null0" 00:13:38.396 }, 00:13:38.396 "method": "bdev_xnvme_create" 00:13:38.396 }, 00:13:38.396 { 00:13:38.396 "method": "bdev_wait_for_examine" 00:13:38.396 } 00:13:38.396 ] 00:13:38.396 } 00:13:38.396 ] 00:13:38.396 } 00:13:38.656 [2024-10-17 10:12:41.503336] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:13:38.656 [2024-10-17 10:12:41.503454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68987 ] 00:13:38.656 [2024-10-17 10:12:41.651745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.656 [2024-10-17 10:12:41.735077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.569  [2024-10-17T10:12:44.680Z] Copying: 306/1024 [MB] (306 MBps) [2024-10-17T10:12:45.618Z] Copying: 618/1024 [MB] (312 MBps) [2024-10-17T10:12:45.878Z] Copying: 930/1024 [MB] (311 MBps) [2024-10-17T10:12:47.856Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:13:44.765 00:13:44.765 10:12:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:44.765 10:12:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:44.765 00:13:44.765 real 0m27.012s 00:13:44.765 user 0m23.891s 00:13:44.765 sys 0m2.598s 00:13:44.765 10:12:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.765 10:12:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:44.765 ************************************ 00:13:44.765 END TEST xnvme_to_malloc_dd_copy 00:13:44.765 ************************************ 00:13:45.025 10:12:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:45.025 10:12:47 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:45.025 10:12:47 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.025 10:12:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.025 ************************************ 00:13:45.025 START TEST xnvme_bdevperf 00:13:45.025 ************************************ 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:45.025 
10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:45.025 10:12:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.025 { 00:13:45.025 "subsystems": [ 00:13:45.025 { 00:13:45.025 "subsystem": "bdev", 00:13:45.025 "config": [ 00:13:45.025 { 00:13:45.025 "params": { 00:13:45.025 "io_mechanism": "libaio", 00:13:45.025 "filename": "/dev/nullb0", 00:13:45.025 "name": "null0" 00:13:45.025 }, 00:13:45.025 "method": "bdev_xnvme_create" 00:13:45.025 }, 00:13:45.025 { 00:13:45.025 "method": "bdev_wait_for_examine" 00:13:45.025 } 00:13:45.025 ] 00:13:45.025 } 00:13:45.025 ] 00:13:45.025 } 00:13:45.025 [2024-10-17 10:12:47.905888] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:13:45.025 [2024-10-17 10:12:47.906016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69092 ] 00:13:45.025 [2024-10-17 10:12:48.056590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.286 [2024-10-17 10:12:48.155903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.546 Running I/O for 5 seconds... 00:13:47.454 152768.00 IOPS, 596.75 MiB/s [2024-10-17T10:12:51.534Z] 156448.00 IOPS, 611.12 MiB/s [2024-10-17T10:12:52.477Z] 162858.67 IOPS, 636.17 MiB/s [2024-10-17T10:12:53.419Z] 159648.00 IOPS, 623.62 MiB/s 00:13:50.328 Latency(us) 00:13:50.328 [2024-10-17T10:12:53.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.328 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.328 null0 : 5.00 158214.84 618.03 0.00 0.00 401.54 145.72 2079.51 00:13:50.328 [2024-10-17T10:12:53.419Z] =================================================================================================================== 00:13:50.328 [2024-10-17T10:12:53.419Z] Total : 158214.84 618.03 0.00 0.00 401.54 145.72 2079.51 00:13:51.269 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:51.269 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:51.269 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:51.269 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:51.269 10:12:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:51.269 10:12:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.269 { 00:13:51.269 "subsystems": [ 00:13:51.269 { 00:13:51.269 "subsystem": "bdev", 00:13:51.269 "config": [ 00:13:51.269 { 00:13:51.269 "params": { 00:13:51.269 "io_mechanism": "io_uring", 00:13:51.269 "filename": "/dev/nullb0", 00:13:51.269 "name": "null0" 00:13:51.269 }, 00:13:51.269 "method": "bdev_xnvme_create" 00:13:51.269 }, 00:13:51.269 { 00:13:51.269 "method": 
"bdev_wait_for_examine" 00:13:51.269 } 00:13:51.269 ] 00:13:51.269 } 00:13:51.269 ] 00:13:51.269 } 00:13:51.269 [2024-10-17 10:12:54.218351] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:13:51.269 [2024-10-17 10:12:54.218472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69166 ] 00:13:51.530 [2024-10-17 10:12:54.370192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.530 [2024-10-17 10:12:54.470926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.791 Running I/O for 5 seconds... 00:13:53.677 177088.00 IOPS, 691.75 MiB/s [2024-10-17T10:12:57.709Z] 180384.00 IOPS, 704.62 MiB/s [2024-10-17T10:12:59.116Z] 196330.67 IOPS, 766.92 MiB/s [2024-10-17T10:13:00.058Z] 203888.00 IOPS, 796.44 MiB/s 00:13:56.967 Latency(us) 00:13:56.967 [2024-10-17T10:13:00.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.967 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:56.967 null0 : 5.00 208954.16 816.23 0.00 0.00 303.85 154.39 1991.29 00:13:56.967 [2024-10-17T10:13:00.058Z] =================================================================================================================== 00:13:56.967 [2024-10-17T10:13:00.058Z] Total : 208954.16 816.23 0.00 0.00 303.85 154.39 1991.29 00:13:57.226 10:13:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:57.226 10:13:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:57.487 00:13:57.487 real 0m12.496s 00:13:57.487 user 0m9.963s 00:13:57.487 sys 0m2.279s 00:13:57.487 10:13:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.487 10:13:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.487 ************************************ 00:13:57.487 END TEST xnvme_bdevperf 00:13:57.487 ************************************ 00:13:57.487 00:13:57.487 real 0m39.719s 00:13:57.487 user 0m33.961s 00:13:57.487 sys 0m4.987s 00:13:57.487 10:13:00 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.487 10:13:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.487 ************************************ 00:13:57.487 END TEST nvme_xnvme 00:13:57.487 ************************************ 00:13:57.487 10:13:00 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:57.487 10:13:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:57.487 10:13:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.487 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:13:57.487 ************************************ 00:13:57.487 START TEST blockdev_xnvme 00:13:57.487 ************************************ 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:57.487 * Looking for test storage... 
00:13:57.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.487 10:13:00 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.487 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:57.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.487 --rc genhtml_branch_coverage=1 00:13:57.487 --rc genhtml_function_coverage=1 00:13:57.487 --rc genhtml_legend=1 00:13:57.487 --rc geninfo_all_blocks=1 00:13:57.487 --rc geninfo_unexecuted_blocks=1 00:13:57.488 00:13:57.488 ' 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:57.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.488 --rc genhtml_branch_coverage=1 00:13:57.488 --rc genhtml_function_coverage=1 00:13:57.488 --rc genhtml_legend=1 
00:13:57.488 --rc geninfo_all_blocks=1 00:13:57.488 --rc geninfo_unexecuted_blocks=1 00:13:57.488 00:13:57.488 ' 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:57.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.488 --rc genhtml_branch_coverage=1 00:13:57.488 --rc genhtml_function_coverage=1 00:13:57.488 --rc genhtml_legend=1 00:13:57.488 --rc geninfo_all_blocks=1 00:13:57.488 --rc geninfo_unexecuted_blocks=1 00:13:57.488 00:13:57.488 ' 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:57.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.488 --rc genhtml_branch_coverage=1 00:13:57.488 --rc genhtml_function_coverage=1 00:13:57.488 --rc genhtml_legend=1 00:13:57.488 --rc geninfo_all_blocks=1 00:13:57.488 --rc geninfo_unexecuted_blocks=1 00:13:57.488 00:13:57.488 ' 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69308 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69308 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69308 ']' 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.488 10:13:00 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.488 10:13:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.750 [2024-10-17 10:13:00.609257] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:13:57.750 [2024-10-17 10:13:00.609377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69308 ] 00:13:57.750 [2024-10-17 10:13:00.749447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.750 [2024-10-17 10:13:00.837573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.322 10:13:01 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.322 10:13:01 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:58.322 10:13:01 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:58.322 10:13:01 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:58.322 10:13:01 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:58.322 10:13:01 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:58.322 10:13:01 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:58.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.893 Waiting for block devices as requested 00:13:58.893 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.893 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.893 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:59.154 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:04.443 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:04.443 10:13:07 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.443 10:13:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.443 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:04.444 nvme0n1 00:14:04.444 nvme1n1 00:14:04.444 nvme2n1 00:14:04.444 nvme2n2 00:14:04.444 nvme2n3 00:14:04.444 nvme3n1 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.444 10:13:07 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ecf84e22-ee3e-40ab-a6a9-ae70cfcfa179"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ecf84e22-ee3e-40ab-a6a9-ae70cfcfa179",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "26c2e3fd-5a29-441b-9c39-6041c7631689"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "26c2e3fd-5a29-441b-9c39-6041c7631689",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bf84d3e9-bf8c-4b96-b52c-a80ec314e690"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf84d3e9-bf8c-4b96-b52c-a80ec314e690",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "1692021f-4ba3-4773-8fb9-9a1cffe3fa17"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1692021f-4ba3-4773-8fb9-9a1cffe3fa17",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "07fdc98c-75e0-40a3-99de-e96281706c39"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "07fdc98c-75e0-40a3-99de-e96281706c39",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b0875760-2cba-4bd3-bb34-a721359c9050"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b0875760-2cba-4bd3-bb34-a721359c9050",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:04.444 10:13:07 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69308 00:14:04.444 10:13:07 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69308 ']' 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69308 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69308 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.444 killing process with pid 69308 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69308' 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69308 00:14:04.444 10:13:07 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69308 00:14:05.824 10:13:08 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:05.824 10:13:08 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:05.824 10:13:08 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:05.824 10:13:08 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.824 10:13:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.824 ************************************ 00:14:05.824 START TEST bdev_hello_world 00:14:05.824 ************************************ 00:14:05.824 10:13:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:05.824 [2024-10-17 10:13:08.565821] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:14:05.824 [2024-10-17 10:13:08.565942] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69662 ] 00:14:05.824 [2024-10-17 10:13:08.718119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.825 [2024-10-17 10:13:08.819201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.113 [2024-10-17 10:13:09.149834] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:06.113 [2024-10-17 10:13:09.149873] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:06.113 [2024-10-17 10:13:09.149888] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:06.113 [2024-10-17 10:13:09.151720] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:06.113 [2024-10-17 10:13:09.151963] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:06.113 [2024-10-17 10:13:09.151981] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:06.113 [2024-10-17 10:13:09.152150] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:14:06.113 00:14:06.113 [2024-10-17 10:13:09.152167] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:07.057 00:14:07.057 real 0m1.339s 00:14:07.057 user 0m1.077s 00:14:07.057 sys 0m0.150s 00:14:07.057 10:13:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.057 10:13:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:07.057 ************************************ 00:14:07.057 END TEST bdev_hello_world 00:14:07.057 ************************************ 00:14:07.057 10:13:09 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:07.057 10:13:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:07.057 10:13:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.057 10:13:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.057 ************************************ 00:14:07.057 START TEST bdev_bounds 00:14:07.057 ************************************ 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69693 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:07.057 Process bdevio pid: 69693 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69693' 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69693 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69693 ']' 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.057 10:13:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:07.057 [2024-10-17 10:13:09.948796] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:14:07.057 [2024-10-17 10:13:09.948914] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69693 ] 00:14:07.057 [2024-10-17 10:13:10.092346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.319 [2024-10-17 10:13:10.195298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.319 [2024-10-17 10:13:10.196119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.319 [2024-10-17 10:13:10.196129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.891 10:13:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.891 10:13:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:07.891 10:13:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:07.891 I/O targets: 00:14:07.891 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:07.891 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:07.891 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:07.891 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:07.891 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:07.891 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:07.891 00:14:07.891 00:14:07.891 CUnit - A unit testing framework for C - Version 2.1-3 00:14:07.891 http://cunit.sourceforge.net/ 00:14:07.891 00:14:07.891 00:14:07.891 Suite: bdevio tests on: nvme3n1 00:14:07.891 Test: blockdev write read block ...passed 00:14:07.891 Test: blockdev write zeroes read block ...passed 00:14:07.891 Test: blockdev write zeroes read no split ...passed 00:14:07.891 Test: blockdev write zeroes read split ...passed 00:14:07.891 Test: blockdev write zeroes read split partial ...passed 00:14:07.891 Test: blockdev reset ...passed 00:14:07.891 Test: blockdev write read 8 blocks ...passed 00:14:07.891 Test: blockdev write read size > 128k ...passed 00:14:07.891 Test: blockdev write read invalid size ...passed 00:14:07.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:07.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:07.891 Test: blockdev write read max offset ...passed 00:14:07.891 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:07.891 Test: blockdev writev readv 8 blocks ...passed 00:14:07.891 Test: blockdev writev readv 30 x 1block ...passed 00:14:07.891 Test: blockdev writev readv block ...passed 00:14:07.891 Test: blockdev writev readv size > 128k ...passed 00:14:07.891 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:07.891 Test: blockdev comparev and writev ...passed 00:14:07.891 Test: blockdev nvme passthru rw ...passed 00:14:07.891 Test: blockdev nvme passthru vendor specific ...passed 00:14:07.891 Test: blockdev nvme admin passthru ...passed 00:14:07.891 Test: blockdev copy ...passed 00:14:07.891 Suite: bdevio tests on: nvme2n3 00:14:07.891 Test: blockdev write read block ...passed 00:14:07.891 Test: blockdev write zeroes read block ...passed 00:14:07.891 Test: blockdev write zeroes read no split ...passed 00:14:07.891 Test: blockdev write zeroes read split ...passed 00:14:08.152 Test: blockdev write zeroes read split partial ...passed 00:14:08.152 Test: blockdev reset ...passed 
00:14:08.152 Test: blockdev write read 8 blocks ...passed 00:14:08.152 Test: blockdev write read size > 128k ...passed 00:14:08.152 Test: blockdev write read invalid size ...passed 00:14:08.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.152 Test: blockdev write read max offset ...passed 00:14:08.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.152 Test: blockdev writev readv 8 blocks ...passed 00:14:08.152 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.152 Test: blockdev writev readv block ...passed 00:14:08.153 Test: blockdev writev readv size > 128k ...passed 00:14:08.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.153 Test: blockdev comparev and writev ...passed 00:14:08.153 Test: blockdev nvme passthru rw ...passed 00:14:08.153 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.153 Test: blockdev nvme admin passthru ...passed 00:14:08.153 Test: blockdev copy ...passed 00:14:08.153 Suite: bdevio tests on: nvme2n2 00:14:08.153 Test: blockdev write read block ...passed 00:14:08.153 Test: blockdev write zeroes read block ...passed 00:14:08.153 Test: blockdev write zeroes read no split ...passed 00:14:08.153 Test: blockdev write zeroes read split ...passed 00:14:08.153 Test: blockdev write zeroes read split partial ...passed 00:14:08.153 Test: blockdev reset ...passed 00:14:08.153 Test: blockdev write read 8 blocks ...passed 00:14:08.153 Test: blockdev write read size > 128k ...passed 00:14:08.153 Test: blockdev write read invalid size ...passed 00:14:08.153 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.153 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.153 Test: blockdev write read max offset ...passed 00:14:08.153 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.153 Test: blockdev writev readv 8 blocks ...passed 00:14:08.153 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.153 Test: blockdev writev readv block ...passed 00:14:08.153 Test: blockdev writev readv size > 128k ...passed 00:14:08.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.153 Test: blockdev comparev and writev ...passed 00:14:08.153 Test: blockdev nvme passthru rw ...passed 00:14:08.153 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.153 Test: blockdev nvme admin passthru ...passed 00:14:08.153 Test: blockdev copy ...passed 00:14:08.153 Suite: bdevio tests on: nvme2n1 00:14:08.153 Test: blockdev write read block ...passed 00:14:08.153 Test: blockdev write zeroes read block ...passed 00:14:08.153 Test: blockdev write zeroes read no split ...passed 00:14:08.153 Test: blockdev write zeroes read split ...passed 00:14:08.153 Test: blockdev write zeroes read split partial ...passed 00:14:08.153 Test: blockdev reset ...passed 00:14:08.153 Test: blockdev write read 8 blocks ...passed 00:14:08.153 Test: blockdev write read size > 128k ...passed 00:14:08.153 Test: blockdev write read invalid size ...passed 00:14:08.153 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.153 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.153 Test: blockdev write read max offset ...passed 00:14:08.153 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.153 Test: blockdev writev readv 8 blocks 
...passed 00:14:08.153 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.153 Test: blockdev writev readv block ...passed 00:14:08.153 Test: blockdev writev readv size > 128k ...passed 00:14:08.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.153 Test: blockdev comparev and writev ...passed 00:14:08.153 Test: blockdev nvme passthru rw ...passed 00:14:08.153 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.153 Test: blockdev nvme admin passthru ...passed 00:14:08.153 Test: blockdev copy ...passed 00:14:08.153 Suite: bdevio tests on: nvme1n1 00:14:08.153 Test: blockdev write read block ...passed 00:14:08.153 Test: blockdev write zeroes read block ...passed 00:14:08.153 Test: blockdev write zeroes read no split ...passed 00:14:08.153 Test: blockdev write zeroes read split ...passed 00:14:08.153 Test: blockdev write zeroes read split partial ...passed 00:14:08.153 Test: blockdev reset ...passed 00:14:08.153 Test: blockdev write read 8 blocks ...passed 00:14:08.153 Test: blockdev write read size > 128k ...passed 00:14:08.153 Test: blockdev write read invalid size ...passed 00:14:08.153 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.153 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.153 Test: blockdev write read max offset ...passed 00:14:08.153 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.153 Test: blockdev writev readv 8 blocks ...passed 00:14:08.153 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.153 Test: blockdev writev readv block ...passed 00:14:08.153 Test: blockdev writev readv size > 128k ...passed 00:14:08.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.153 Test: blockdev comparev and writev ...passed 00:14:08.153 Test: blockdev nvme passthru rw ...passed 00:14:08.153 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.153 Test: blockdev nvme admin passthru ...passed 00:14:08.153 Test: blockdev copy ...passed 00:14:08.153 Suite: bdevio tests on: nvme0n1 00:14:08.153 Test: blockdev write read block ...passed 00:14:08.153 Test: blockdev write zeroes read block ...passed 00:14:08.153 Test: blockdev write zeroes read no split ...passed 00:14:08.153 Test: blockdev write zeroes read split ...passed 00:14:08.153 Test: blockdev write zeroes read split partial ...passed 00:14:08.153 Test: blockdev reset ...passed 00:14:08.153 Test: blockdev write read 8 blocks ...passed 00:14:08.153 Test: blockdev write read size > 128k ...passed 00:14:08.153 Test: blockdev write read invalid size ...passed 00:14:08.153 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.153 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.153 Test: blockdev write read max offset ...passed 00:14:08.153 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.153 Test: blockdev writev readv 8 blocks ...passed 00:14:08.153 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.153 Test: blockdev writev readv block ...passed 00:14:08.153 Test: blockdev writev readv size > 128k ...passed 00:14:08.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.153 Test: blockdev comparev and writev ...passed 00:14:08.153 Test: blockdev nvme passthru rw ...passed 00:14:08.153 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.153 Test: blockdev nvme admin passthru ...passed 00:14:08.153 Test: blockdev copy ...passed 
00:14:08.153 00:14:08.153 Run Summary: Type Total Ran Passed Failed Inactive 00:14:08.153 suites 6 6 n/a 0 0 00:14:08.153 tests 138 138 138 0 0 00:14:08.153 asserts 780 780 780 0 n/a 00:14:08.153 00:14:08.153 Elapsed time = 0.849 seconds 00:14:08.153 0 00:14:08.153 10:13:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69693 00:14:08.153 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69693 ']' 00:14:08.153 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69693 00:14:08.153 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:08.153 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.153 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69693 00:14:08.414 killing process with pid 69693 00:14:08.414 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.414 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.414 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69693' 00:14:08.414 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69693 00:14:08.414 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69693 00:14:08.986 10:13:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:08.986 00:14:08.986 real 0m2.086s 00:14:08.986 user 0m5.295s 00:14:08.986 sys 0m0.272s 00:14:08.986 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.986 ************************************ 00:14:08.986 END TEST bdev_bounds 00:14:08.986 ************************************ 00:14:08.986 10:13:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:08.986 10:13:12 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:08.986 10:13:12 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:08.986 10:13:12 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.986 10:13:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.986 ************************************ 00:14:08.986 START TEST bdev_nbd 00:14:08.986 ************************************ 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69749 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69749 /var/tmp/spdk-nbd.sock 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69749 ']' 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.986 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:09.247 [2024-10-17 10:13:12.079086] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:14:09.247 [2024-10-17 10:13:12.079382] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.247 [2024-10-17 10:13:12.228563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.247 [2024-10-17 10:13:12.329769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.191 10:13:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.191 
1+0 records in 00:14:10.191 1+0 records out 00:14:10.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335966 s, 12.2 MB/s 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.191 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:10.452 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.453 1+0 records in 00:14:10.453 1+0 records out 00:14:10.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374426 s, 10.9 MB/s 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.453 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:10.712 10:13:13 
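Each nbd_start_disk in the trace above is followed by the same readiness dance: poll /proc/partitions for the new device name, then retry a single 4 KiB O_DIRECT read until it lands a non-empty file. A minimal reconstruction of that helper, based on the waitfornbd() structure the xtrace reveals in common/autotest_common.sh (the sleep between polls is an assumption; the trace records only the loop counters):

    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

        # Phase 1: wait until the kernel has registered the device at all.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        # Phase 2: prove it can complete I/O -- one 4 KiB O_DIRECT read
        # must produce a non-empty file before the device counts as up.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [[ $size != 0 ]] && return 0
            sleep 0.1
        done
        return 1
    }

Polling /proc/partitions alone is not enough; the read probe catches devices that registered but cannot yet complete I/O, which is why every "1+0 records in/out" pair in this log is part of the health check rather than the data test.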
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.712 1+0 records in 00:14:10.712 1+0 records out 00:14:10.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386321 s, 10.6 MB/s 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.712 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.971 1+0 records in 00:14:10.971 1+0 records out 00:14:10.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571866 s, 7.2 MB/s 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.971 10:13:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:11.229 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:11.229 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:11.229 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:11.229 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:11.229 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:11.229 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.230 1+0 records in 00:14:11.230 1+0 records out 00:14:11.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443682 s, 9.2 MB/s 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:11.230 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:11.487 10:13:14 
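Six bdevs come up this way, one RPC at a time over the Unix socket. In this first pass (nbd_rpc_start_stop_verify), nbd_start_disk is called without a /dev/nbdX argument, so SPDK claims the next free NBD node and prints the path it chose. The loop above, reduced to its essentials as shown in nbd_common.sh@22-30:

    rpc_server=/var/tmp/spdk-nbd.sock
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    for bdev in "${bdev_list[@]}"; do
        # SPDK picks the node and echoes it back, e.g. /dev/nbd0
        nbd_device=$(scripts/rpc.py -s "$rpc_server" nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd_device")"
    done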
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:11.487 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.488 1+0 records in 00:14:11.488 1+0 records out 00:14:11.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062681 s, 6.5 MB/s 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:11.488 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd0", 00:14:11.746 "bdev_name": "nvme0n1" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd1", 00:14:11.746 "bdev_name": "nvme1n1" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd2", 00:14:11.746 "bdev_name": "nvme2n1" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd3", 00:14:11.746 "bdev_name": "nvme2n2" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd4", 00:14:11.746 "bdev_name": "nvme2n3" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd5", 00:14:11.746 "bdev_name": "nvme3n1" 00:14:11.746 } 00:14:11.746 ]' 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd0", 00:14:11.746 "bdev_name": "nvme0n1" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd1", 00:14:11.746 "bdev_name": "nvme1n1" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd2", 00:14:11.746 "bdev_name": "nvme2n1" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd3", 00:14:11.746 "bdev_name": "nvme2n2" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": "/dev/nbd4", 00:14:11.746 "bdev_name": "nvme2n3" 00:14:11.746 }, 00:14:11.746 { 00:14:11.746 "nbd_device": 
"/dev/nbd5", 00:14:11.746 "bdev_name": "nvme3n1" 00:14:11.746 } 00:14:11.746 ]' 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.746 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.004 10:13:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.004 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.265 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:12.526 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.527 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.788 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.051 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:13.051 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.051 10:13:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.051 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:13.309 /dev/nbd0 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.309 1+0 records in 00:14:13.309 1+0 records out 00:14:13.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530792 s, 7.7 MB/s 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.309 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:13.567 /dev/nbd1 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.567 1+0 records in 00:14:13.567 1+0 records out 00:14:13.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631633 s, 6.5 MB/s 00:14:13.567 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.568 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:13.568 10:13:16 
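The teardown between the two passes (above, elapsed 00:14:12 to 00:14:13) runs the inverse helpers: nbd_stop_disk per node, a wait for the name to leave /proc/partitions, and a jq-based count over nbd_get_disks that must come back 0 before the test proceeds. Sketches of both, again with an assumed sleep between polls; the `|| true` mirrors the bare `true` the trace shows when grep -c finds no match:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }

    nbd_get_count() {
        local rpc_server=$1 json name
        json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        name=$(echo "$json" | jq -r '.[] | .nbd_device')
        echo "$name" | grep -c /dev/nbd || true   # prints 0 on an empty list
    }

With all six devices detached, nbd_get_disks returns '[]', the count is 0, and the `'[' 0 -ne 0 ']'` guard falls through to return 0.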
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.568 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:13.568 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:13.568 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.568 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.568 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:13.828 /dev/nbd10 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:13.828 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.829 1+0 records in 00:14:13.829 1+0 records out 00:14:13.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392428 s, 10.4 MB/s 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.829 10:13:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:14.088 /dev/nbd11 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.088 10:13:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.088 1+0 records in 00:14:14.088 1+0 records out 00:14:14.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421017 s, 9.7 MB/s 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:14.088 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:14.353 /dev/nbd12 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.353 1+0 records in 00:14:14.353 1+0 records out 00:14:14.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341516 s, 12.0 MB/s 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:14.353 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:14.615 /dev/nbd13 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.615 1+0 records in 00:14:14.615 1+0 records out 00:14:14.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641814 s, 6.4 MB/s 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd0", 00:14:14.615 "bdev_name": "nvme0n1" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd1", 00:14:14.615 "bdev_name": "nvme1n1" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd10", 00:14:14.615 "bdev_name": "nvme2n1" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd11", 00:14:14.615 "bdev_name": "nvme2n2" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd12", 00:14:14.615 "bdev_name": "nvme2n3" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd13", 00:14:14.615 "bdev_name": "nvme3n1" 00:14:14.615 } 00:14:14.615 ]' 00:14:14.615 10:13:17 
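This second pass (nbd_rpc_data_verify) pins each bdev to an explicit node instead of letting SPDK pick one: nbd_start_disk gets both the bdev name and the target /dev/nbdX, which is why the mapping above jumps from nbd1 to nbd10-nbd13. The paired-array loop from nbd_common.sh@9-17, reduced:

    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < 6; i++)); do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done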
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:14.615 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd0", 00:14:14.615 "bdev_name": "nvme0n1" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd1", 00:14:14.615 "bdev_name": "nvme1n1" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd10", 00:14:14.615 "bdev_name": "nvme2n1" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd11", 00:14:14.615 "bdev_name": "nvme2n2" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd12", 00:14:14.615 "bdev_name": "nvme2n3" 00:14:14.615 }, 00:14:14.615 { 00:14:14.615 "nbd_device": "/dev/nbd13", 00:14:14.615 "bdev_name": "nvme3n1" 00:14:14.615 } 00:14:14.615 ]' 00:14:14.875 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:14.875 /dev/nbd1 00:14:14.875 /dev/nbd10 00:14:14.875 /dev/nbd11 00:14:14.875 /dev/nbd12 00:14:14.875 /dev/nbd13' 00:14:14.875 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:14.875 /dev/nbd1 00:14:14.875 /dev/nbd10 00:14:14.875 /dev/nbd11 00:14:14.875 /dev/nbd12 00:14:14.875 /dev/nbd13' 00:14:14.875 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:14.876 256+0 records in 00:14:14.876 256+0 records out 00:14:14.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00959365 s, 109 MB/s 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:14.876 256+0 records in 00:14:14.876 256+0 records out 00:14:14.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0662568 s, 15.8 MB/s 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:14.876 256+0 records in 00:14:14.876 256+0 records out 00:14:14.876 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0824946 s, 12.7 MB/s 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.876 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:15.136 256+0 records in 00:14:15.136 256+0 records out 00:14:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639153 s, 16.4 MB/s 00:14:15.136 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.136 10:13:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:15.136 256+0 records in 00:14:15.136 256+0 records out 00:14:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0612322 s, 17.1 MB/s 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:15.136 256+0 records in 00:14:15.136 256+0 records out 00:14:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0745107 s, 14.1 MB/s 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:15.136 256+0 records in 00:14:15.136 256+0 records out 00:14:15.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0634043 s, 16.5 MB/s 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:15.136 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.137 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.398 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.660 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.922 10:13:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.185 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:16.445 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:16.445 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:16.445 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:16.445 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.445 10:13:19 
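The data-integrity check just above is plain dd and cmp: one 1 MiB buffer of random bytes is written through every NBD node with O_DIRECT, then compared back byte-for-byte. The nbd_dd_data_verify core from nbd_common.sh@70-85:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                     # first 1 MiB, byte-for-byte
    done
    rm "$tmp_file"

The per-device throughput figures in the log (12-17 MB/s) are for these 1 MiB direct writes through the NBD kernel/userspace round trip, not raw device speed.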
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.446 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:16.446 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.446 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.446 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:16.446 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:16.446 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:16.706 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:16.967 malloc_lvol_verify 00:14:16.967 10:13:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:17.227 7da6d318-ffe4-4204-8a6f-5fcd364fdcee 00:14:17.227 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:17.227 245d2eed-5ff9-4cd7-8616-b1d1658d0f18 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:17.488 /dev/nbd0 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:14:17.488 mke2fs 1.47.0 (5-Feb-2023) 00:14:17.488 Discarding device blocks: 0/4096 done 00:14:17.488 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:17.488 00:14:17.488 Allocating group tables: 0/1 done 00:14:17.488 Writing inode tables: 0/1 done 00:14:17.488 Creating journal (1024 blocks): done 00:14:17.488 Writing superblocks and filesystem accounting information: 0/1 done 00:14:17.488 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.488 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69749 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69749 ']' 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69749 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69749 00:14:17.750 killing process with pid 69749 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69749' 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69749 00:14:17.750 10:13:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69749 00:14:18.693 ************************************ 00:14:18.693 END TEST bdev_nbd 00:14:18.693 ************************************ 00:14:18.693 10:13:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:18.693 00:14:18.693 real 0m9.527s 00:14:18.693 user 0m13.601s 00:14:18.693 sys 0m3.116s 00:14:18.693 10:13:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.693 
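The nbd_with_lvol_verify step that produced the mke2fs output above chains five RPCs before trusting the node enough to format it: a 16 MiB malloc bdev, an lvstore on top of it, a 4 MiB lvol, an NBD export, and a capacity check against sysfs. As a sketch, with the sizes as passed in the trace (the trace shows 8192 512-byte sectors, i.e. the 4 MiB lvol, at /sys/block/nbd0/size):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) != 0 ))                 # capacity visible to the kernel?
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0

A successful mkfs.ext4 is the end-to-end proof: the lvol stack, the NBD export, and the kernel block layer all agree on the device geometry.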
10:13:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:18.693 10:13:21 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:18.693 10:13:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:18.693 10:13:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:18.693 10:13:21 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:18.693 10:13:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:18.693 10:13:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.693 10:13:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.693 ************************************ 00:14:18.693 START TEST bdev_fio 00:14:18.693 ************************************ 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:18.693 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:18.693 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:18.694 ************************************ 00:14:18.694 START TEST bdev_fio_rw_verify 00:14:18.694 ************************************ 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.694 10:13:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.955 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.955 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.955 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.955 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.955 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.955 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.955 fio-3.35 00:14:18.955 Starting 6 threads 00:14:31.193 00:14:31.193 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70150: Thu Oct 17 10:13:32 2024 00:14:31.193 read: IOPS=21.3k, BW=83.3MiB/s (87.3MB/s)(833MiB/10003msec) 00:14:31.193 slat (usec): min=2, max=1945, avg= 5.30, stdev=10.21 00:14:31.193 clat (usec): min=76, max=54469, avg=786.01, 
stdev=621.69 00:14:31.193 lat (usec): min=80, max=54474, avg=791.31, stdev=622.37 00:14:31.193 clat percentiles (usec): 00:14:31.193 | 50.000th=[ 594], 99.000th=[ 2900], 99.900th=[ 4293], 99.990th=[ 5407], 00:14:31.193 | 99.999th=[15401] 00:14:31.193 write: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(845MiB/10003msec); 0 zone resets 00:14:31.193 slat (usec): min=10, max=3164, avg=33.19, stdev=100.72 00:14:31.193 clat (usec): min=59, max=222684, avg=1176.48, stdev=2837.95 00:14:31.193 lat (usec): min=73, max=222703, avg=1209.67, stdev=2841.26 00:14:31.193 clat percentiles (usec): 00:14:31.193 | 50.000th=[ 848], 99.000th=[ 4948], 99.900th=[ 8848], 00:14:31.193 | 99.990th=[206570], 99.999th=[223347] 00:14:31.193 bw ( KiB/s): min=37759, max=150193, per=100.00%, avg=88197.53, stdev=5839.05, samples=114 00:14:31.193 iops : min= 9438, max=37547, avg=22048.32, stdev=1459.73, samples=114 00:14:31.193 lat (usec) : 100=0.08%, 250=7.22%, 500=21.68%, 750=23.36%, 1000=14.90% 00:14:31.193 lat (msec) : 2=23.99%, 4=7.66%, 10=1.08%, 20=0.02%, 50=0.01% 00:14:31.193 lat (msec) : 100=0.01%, 250=0.01% 00:14:31.193 cpu : usr=45.84%, sys=31.27%, ctx=6637, majf=0, minf=19408 00:14:31.193 IO depths : 1=10.9%, 2=23.1%, 4=51.3%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.193 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.193 issued rwts: total=213232,216321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.193 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:31.193 00:14:31.193 Run status group 0 (all jobs): 00:14:31.193 READ: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=833MiB (873MB), run=10003-10003msec 00:14:31.193 WRITE: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=845MiB (886MB), run=10003-10003msec 00:14:31.193 ----------------------------------------------------- 00:14:31.193 Suppressions used: 00:14:31.193 count bytes template 00:14:31.193 6 48 /usr/src/fio/parse.c 00:14:31.193 2938 282048 /usr/src/fio/iolog.c 00:14:31.193 1 8 libtcmalloc_minimal.so 00:14:31.193 1 904 libcrypto.so 00:14:31.193 ----------------------------------------------------- 00:14:31.193 00:14:31.193 00:14:31.193 real 0m11.856s 00:14:31.193 user 0m28.953s 00:14:31.193 sys 0m19.036s 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.193 ************************************ 00:14:31.193 END TEST bdev_fio_rw_verify 00:14:31.193 ************************************ 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:31.193 
10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:31.193 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ecf84e22-ee3e-40ab-a6a9-ae70cfcfa179"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ecf84e22-ee3e-40ab-a6a9-ae70cfcfa179",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "26c2e3fd-5a29-441b-9c39-6041c7631689"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "26c2e3fd-5a29-441b-9c39-6041c7631689",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bf84d3e9-bf8c-4b96-b52c-a80ec314e690"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf84d3e9-bf8c-4b96-b52c-a80ec314e690",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "1692021f-4ba3-4773-8fb9-9a1cffe3fa17"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1692021f-4ba3-4773-8fb9-9a1cffe3fa17",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "07fdc98c-75e0-40a3-99de-e96281706c39"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "07fdc98c-75e0-40a3-99de-e96281706c39",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b0875760-2cba-4bd3-bb34-a721359c9050"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b0875760-2cba-4bd3-bb34-a721359c9050",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.194 /home/vagrant/spdk_repo/spdk 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 
00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:31.194 00:14:31.194 real 0m12.019s 00:14:31.194 user 0m29.029s 00:14:31.194 sys 0m19.102s 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.194 10:13:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:31.194 ************************************ 00:14:31.194 END TEST bdev_fio 00:14:31.194 ************************************ 00:14:31.194 10:13:33 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:31.194 10:13:33 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:31.194 10:13:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:31.194 10:13:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.194 10:13:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.194 ************************************ 00:14:31.194 START TEST bdev_verify 00:14:31.194 ************************************ 00:14:31.194 10:13:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:31.194 [2024-10-17 10:13:33.734616] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:14:31.194 [2024-10-17 10:13:33.734752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70320 ] 00:14:31.194 [2024-10-17 10:13:33.885447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:31.194 [2024-10-17 10:13:34.004541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.194 [2024-10-17 10:13:34.004550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.478 Running I/O for 5 seconds... 
00:14:33.807 21206.00 IOPS, 82.84 MiB/s [2024-10-17T10:13:37.843Z] 22283.00 IOPS, 87.04 MiB/s [2024-10-17T10:13:38.784Z] 22736.67 IOPS, 88.82 MiB/s [2024-10-17T10:13:39.720Z] 22612.00 IOPS, 88.33 MiB/s [2024-10-17T10:13:39.720Z] 22784.20 IOPS, 89.00 MiB/s 00:14:36.629 Latency(us) 00:14:36.629 [2024-10-17T10:13:39.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.629 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x0 length 0xa0000 00:14:36.629 nvme0n1 : 5.04 1776.44 6.94 0.00 0.00 71915.36 10637.00 79046.50 00:14:36.629 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0xa0000 length 0xa0000 00:14:36.629 nvme0n1 : 5.01 1660.36 6.49 0.00 0.00 76955.84 9729.58 96388.33 00:14:36.629 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x0 length 0xbd0bd 00:14:36.629 nvme1n1 : 5.06 2300.10 8.98 0.00 0.00 55355.89 2961.72 58881.58 00:14:36.629 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:36.629 nvme1n1 : 5.05 2254.23 8.81 0.00 0.00 56540.49 7208.96 62914.56 00:14:36.629 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x0 length 0x80000 00:14:36.629 nvme2n1 : 5.06 1873.07 7.32 0.00 0.00 67812.89 7360.20 64124.46 00:14:36.629 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x80000 length 0x80000 00:14:36.629 nvme2n1 : 5.04 1801.48 7.04 0.00 0.00 70498.99 7914.73 77030.01 00:14:36.629 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x0 length 0x80000 00:14:36.629 nvme2n2 : 5.07 1818.97 7.11 0.00 0.00 69677.86 6326.74 68964.04 00:14:36.629 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x80000 length 0x80000 00:14:36.629 nvme2n2 : 5.07 1793.97 7.01 0.00 0.00 70639.17 11645.24 70173.93 00:14:36.629 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x0 length 0x80000 00:14:36.629 nvme2n3 : 5.07 1818.16 7.10 0.00 0.00 69562.05 7511.43 75820.11 00:14:36.629 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x80000 length 0x80000 00:14:36.629 nvme2n3 : 5.07 1792.21 7.00 0.00 0.00 70569.88 3112.96 66544.25 00:14:36.629 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x0 length 0x20000 00:14:36.629 nvme3n1 : 5.07 1817.42 7.10 0.00 0.00 69448.80 8519.68 68157.44 00:14:36.629 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.629 Verification LBA range: start 0x20000 length 0x20000 00:14:36.629 nvme3n1 : 5.08 1815.90 7.09 0.00 0.00 69498.32 4763.96 66544.25 00:14:36.629 [2024-10-17T10:13:39.720Z] =================================================================================================================== 00:14:36.629 [2024-10-17T10:13:39.720Z] Total : 22522.31 87.98 0.00 0.00 67623.05 2961.72 96388.33 00:14:37.196 00:14:37.196 real 0m6.598s 00:14:37.196 user 0m10.862s 00:14:37.196 sys 0m1.311s 00:14:37.196 ************************************ 00:14:37.196 END TEST 
bdev_verify 00:14:37.196 ************************************ 00:14:37.196 10:13:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.196 10:13:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:37.456 10:13:40 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:37.456 10:13:40 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:37.456 10:13:40 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.456 10:13:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.456 ************************************ 00:14:37.456 START TEST bdev_verify_big_io 00:14:37.456 ************************************ 00:14:37.456 10:13:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:37.456 [2024-10-17 10:13:40.392721] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:14:37.456 [2024-10-17 10:13:40.392837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70414 ] 00:14:37.456 [2024-10-17 10:13:40.543636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:37.714 [2024-10-17 10:13:40.651083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.714 [2024-10-17 10:13:40.651113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.281 Running I/O for 5 seconds... 
00:14:44.111 1344.00 IOPS, 84.00 MiB/s [2024-10-17T10:13:47.462Z] 2724.00 IOPS, 170.25 MiB/s [2024-10-17T10:13:47.462Z] 2851.33 IOPS, 178.21 MiB/s 00:14:44.371 Latency(us) 00:14:44.371 [2024-10-17T10:13:47.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.371 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x0 length 0xa000 00:14:44.371 nvme0n1 : 6.01 106.41 6.65 0.00 0.00 1138369.14 256497.82 1155046.79 00:14:44.371 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0xa000 length 0xa000 00:14:44.371 nvme0n1 : 6.15 98.83 6.18 0.00 0.00 1240449.55 96388.33 1884210.41 00:14:44.371 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x0 length 0xbd0b 00:14:44.371 nvme1n1 : 6.16 105.35 6.58 0.00 0.00 1127411.37 9074.22 2529487.95 00:14:44.371 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:44.371 nvme1n1 : 6.17 138.16 8.64 0.00 0.00 852446.24 108890.58 1219574.55 00:14:44.371 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x0 length 0x8000 00:14:44.371 nvme2n1 : 6.13 130.47 8.15 0.00 0.00 875456.10 129862.10 1025991.29 00:14:44.371 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x8000 length 0x8000 00:14:44.371 nvme2n1 : 6.16 101.25 6.33 0.00 0.00 1151854.98 28835.84 1987454.82 00:14:44.371 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x0 length 0x8000 00:14:44.371 nvme2n2 : 6.15 120.99 7.56 0.00 0.00 925521.73 129862.10 1393799.48 00:14:44.371 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x8000 length 0x8000 00:14:44.371 nvme2n2 : 6.17 121.90 7.62 0.00 0.00 923295.18 10435.35 1387346.71 00:14:44.371 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x0 length 0x8000 00:14:44.371 nvme2n3 : 6.15 98.82 6.18 0.00 0.00 1106835.63 16938.54 2516582.40 00:14:44.371 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x8000 length 0x8000 00:14:44.371 nvme2n3 : 6.17 132.31 8.27 0.00 0.00 822295.38 7158.55 1477685.56 00:14:44.371 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x0 length 0x2000 00:14:44.371 nvme3n1 : 6.16 175.36 10.96 0.00 0.00 602287.32 7410.61 1142141.24 00:14:44.371 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.371 Verification LBA range: start 0x2000 length 0x2000 00:14:44.371 nvme3n1 : 6.17 132.20 8.26 0.00 0.00 793688.84 10132.87 1806777.11 00:14:44.371 [2024-10-17T10:13:47.462Z] =================================================================================================================== 00:14:44.371 [2024-10-17T10:13:47.462Z] Total : 1462.03 91.38 0.00 0.00 932778.00 7158.55 2529487.95 00:14:45.305 00:14:45.305 real 0m7.890s 00:14:45.305 user 0m14.550s 00:14:45.305 sys 0m0.408s 00:14:45.305 10:13:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.305 
************************************ 00:14:45.305 END TEST bdev_verify_big_io 00:14:45.305 ************************************ 00:14:45.305 10:13:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 10:13:48 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:45.305 10:13:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:45.305 10:13:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.305 10:13:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 ************************************ 00:14:45.305 START TEST bdev_write_zeroes 00:14:45.305 ************************************ 00:14:45.305 10:13:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:45.305 [2024-10-17 10:13:48.350256] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:14:45.305 [2024-10-17 10:13:48.350372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:14:45.563 [2024-10-17 10:13:48.499416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.563 [2024-10-17 10:13:48.602026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.129 Running I/O for 1 seconds... 
00:14:47.064 86217.00 IOPS, 336.79 MiB/s 00:14:47.064 Latency(us) 00:14:47.064 [2024-10-17T10:13:50.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.064 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:47.064 nvme0n1 : 1.02 14080.58 55.00 0.00 0.00 9081.64 5242.88 20568.22 00:14:47.064 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:47.064 nvme1n1 : 1.02 15391.80 60.12 0.00 0.00 8300.80 3629.69 19660.80 00:14:47.064 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:47.064 nvme2n1 : 1.02 13923.15 54.39 0.00 0.00 9167.91 5570.56 18955.03 00:14:47.064 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:47.064 nvme2n2 : 1.02 14032.42 54.81 0.00 0.00 9040.93 5570.56 19358.33 00:14:47.064 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:47.064 nvme2n3 : 1.02 13988.50 54.64 0.00 0.00 9049.58 5570.56 19156.68 00:14:47.064 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:47.064 nvme3n1 : 1.03 13972.53 54.58 0.00 0.00 9041.55 5494.94 20568.22 00:14:47.064 [2024-10-17T10:13:50.155Z] =================================================================================================================== 00:14:47.064 [2024-10-17T10:13:50.155Z] Total : 85388.98 333.55 0.00 0.00 8936.72 3629.69 20568.22 00:14:47.998 00:14:47.998 real 0m2.516s 00:14:47.998 user 0m1.918s 00:14:47.998 sys 0m0.397s 00:14:47.998 ************************************ 00:14:47.998 END TEST bdev_write_zeroes 00:14:47.998 ************************************ 00:14:47.998 10:13:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.998 10:13:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:47.998 10:13:50 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:47.998 10:13:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:47.998 10:13:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.998 10:13:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.998 ************************************ 00:14:47.998 START TEST bdev_json_nonenclosed 00:14:47.998 ************************************ 00:14:47.998 10:13:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:47.998 [2024-10-17 10:13:50.936469] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:14:47.998 [2024-10-17 10:13:50.936587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70573 ] 00:14:47.998 [2024-10-17 10:13:51.087201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.256 [2024-10-17 10:13:51.183093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.256 [2024-10-17 10:13:51.183174] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:48.256 [2024-10-17 10:13:51.183190] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:48.256 [2024-10-17 10:13:51.183199] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:48.514 00:14:48.514 real 0m0.489s 00:14:48.514 user 0m0.288s 00:14:48.514 sys 0m0.097s 00:14:48.514 10:13:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.514 ************************************ 00:14:48.514 END TEST bdev_json_nonenclosed 00:14:48.514 ************************************ 00:14:48.514 10:13:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:48.514 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:48.514 10:13:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:48.514 10:13:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.514 10:13:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.514 ************************************ 00:14:48.514 START TEST bdev_json_nonarray 00:14:48.514 ************************************ 00:14:48.514 10:13:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:48.514 [2024-10-17 10:13:51.488864] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:14:48.514 [2024-10-17 10:13:51.488987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70599 ] 00:14:48.772 [2024-10-17 10:13:51.636765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.772 [2024-10-17 10:13:51.737765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.772 [2024-10-17 10:13:51.737845] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:14:48.772 [2024-10-17 10:13:51.737862] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:48.772 [2024-10-17 10:13:51.737871] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:49.030 00:14:49.030 real 0m0.489s 00:14:49.030 user 0m0.294s 00:14:49.030 sys 0m0.087s 00:14:49.030 10:13:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:49.030 ************************************ 00:14:49.030 END TEST bdev_json_nonarray 00:14:49.030 ************************************ 00:14:49.030 10:13:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:49.030 10:13:51 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:49.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:11.514 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:11.514 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:11.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:18.096 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:18.096 00:15:18.096 real 1m20.446s 00:15:18.096 user 1m26.749s 00:15:18.096 sys 1m31.969s 00:15:18.096 10:14:20 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.096 10:14:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.097 ************************************ 00:15:18.097 END TEST blockdev_xnvme 00:15:18.097 ************************************ 00:15:18.097 10:14:20 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:18.097 10:14:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:18.097 10:14:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.097 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:15:18.097 ************************************ 00:15:18.097 START TEST ublk 00:15:18.097 ************************************ 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:18.097 * Looking for test storage... 
00:15:18.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.097 10:14:20 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.097 10:14:20 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.097 10:14:20 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.097 10:14:20 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.097 10:14:20 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.097 10:14:20 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:18.097 10:14:20 ublk -- scripts/common.sh@345 -- # : 1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.097 10:14:20 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.097 10:14:20 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@353 -- # local d=1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.097 10:14:20 ublk -- scripts/common.sh@355 -- # echo 1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.097 10:14:20 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@353 -- # local d=2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.097 10:14:20 ublk -- scripts/common.sh@355 -- # echo 2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.097 10:14:20 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.097 10:14:20 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.097 10:14:20 ublk -- scripts/common.sh@368 -- # return 0 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.097 --rc genhtml_branch_coverage=1 00:15:18.097 --rc genhtml_function_coverage=1 00:15:18.097 --rc genhtml_legend=1 00:15:18.097 --rc geninfo_all_blocks=1 00:15:18.097 --rc geninfo_unexecuted_blocks=1 00:15:18.097 00:15:18.097 ' 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.097 --rc genhtml_branch_coverage=1 00:15:18.097 --rc genhtml_function_coverage=1 00:15:18.097 --rc genhtml_legend=1 00:15:18.097 --rc geninfo_all_blocks=1 00:15:18.097 --rc geninfo_unexecuted_blocks=1 00:15:18.097 00:15:18.097 ' 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.097 --rc genhtml_branch_coverage=1 00:15:18.097 --rc 
genhtml_function_coverage=1 00:15:18.097 --rc genhtml_legend=1 00:15:18.097 --rc geninfo_all_blocks=1 00:15:18.097 --rc geninfo_unexecuted_blocks=1 00:15:18.097 00:15:18.097 ' 00:15:18.097 10:14:20 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.097 --rc genhtml_branch_coverage=1 00:15:18.097 --rc genhtml_function_coverage=1 00:15:18.097 --rc genhtml_legend=1 00:15:18.097 --rc geninfo_all_blocks=1 00:15:18.097 --rc geninfo_unexecuted_blocks=1 00:15:18.097 00:15:18.097 ' 00:15:18.097 10:14:20 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:18.097 10:14:20 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:18.097 10:14:20 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:18.097 10:14:20 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:18.097 10:14:20 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:18.097 10:14:20 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:18.097 10:14:20 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:18.097 10:14:20 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:18.097 10:14:20 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:18.097 10:14:20 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:18.097 10:14:20 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:18.097 10:14:21 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:18.097 10:14:21 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:18.097 10:14:21 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.097 10:14:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:18.097 ************************************ 00:15:18.097 START TEST test_save_ublk_config 00:15:18.097 ************************************ 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70908 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70908 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70908 ']' 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.097 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:18.097 [2024-10-17 10:14:21.086708] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:15:18.097 [2024-10-17 10:14:21.086808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70908 ] 00:15:18.354 [2024-10-17 10:14:21.229386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.354 [2024-10-17 10:14:21.328629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.919 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.919 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:18.919 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:18.919 10:14:21 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:18.919 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.919 10:14:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:18.919 [2024-10-17 10:14:21.938076] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:18.919 [2024-10-17 10:14:21.938861] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:18.919 malloc0 00:15:18.919 [2024-10-17 10:14:22.002197] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:18.919 [2024-10-17 10:14:22.002277] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:18.919 [2024-10-17 10:14:22.002287] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:18.919 [2024-10-17 10:14:22.002294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:19.178 [2024-10-17 10:14:22.011142] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:19.178 [2024-10-17 10:14:22.011162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:19.178 [2024-10-17 10:14:22.018083] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:19.178 [2024-10-17 10:14:22.018175] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:19.178 [2024-10-17 10:14:22.035080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:19.178 0 00:15:19.178 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.178 10:14:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:19.178 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.178 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:19.436 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.436 10:14:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:19.436 
"subsystems": [ 00:15:19.436 { 00:15:19.436 "subsystem": "fsdev", 00:15:19.436 "config": [ 00:15:19.436 { 00:15:19.436 "method": "fsdev_set_opts", 00:15:19.436 "params": { 00:15:19.436 "fsdev_io_pool_size": 65535, 00:15:19.436 "fsdev_io_cache_size": 256 00:15:19.436 } 00:15:19.436 } 00:15:19.436 ] 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "subsystem": "keyring", 00:15:19.436 "config": [] 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "subsystem": "iobuf", 00:15:19.436 "config": [ 00:15:19.436 { 00:15:19.436 "method": "iobuf_set_options", 00:15:19.436 "params": { 00:15:19.436 "small_pool_count": 8192, 00:15:19.436 "large_pool_count": 1024, 00:15:19.436 "small_bufsize": 8192, 00:15:19.436 "large_bufsize": 135168 00:15:19.436 } 00:15:19.436 } 00:15:19.436 ] 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "subsystem": "sock", 00:15:19.436 "config": [ 00:15:19.436 { 00:15:19.436 "method": "sock_set_default_impl", 00:15:19.436 "params": { 00:15:19.436 "impl_name": "posix" 00:15:19.436 } 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "method": "sock_impl_set_options", 00:15:19.436 "params": { 00:15:19.436 "impl_name": "ssl", 00:15:19.436 "recv_buf_size": 4096, 00:15:19.436 "send_buf_size": 4096, 00:15:19.436 "enable_recv_pipe": true, 00:15:19.436 "enable_quickack": false, 00:15:19.436 "enable_placement_id": 0, 00:15:19.436 "enable_zerocopy_send_server": true, 00:15:19.436 "enable_zerocopy_send_client": false, 00:15:19.436 "zerocopy_threshold": 0, 00:15:19.436 "tls_version": 0, 00:15:19.436 "enable_ktls": false 00:15:19.436 } 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "method": "sock_impl_set_options", 00:15:19.436 "params": { 00:15:19.436 "impl_name": "posix", 00:15:19.436 "recv_buf_size": 2097152, 00:15:19.436 "send_buf_size": 2097152, 00:15:19.436 "enable_recv_pipe": true, 00:15:19.436 "enable_quickack": false, 00:15:19.436 "enable_placement_id": 0, 00:15:19.436 "enable_zerocopy_send_server": true, 00:15:19.436 "enable_zerocopy_send_client": false, 00:15:19.436 "zerocopy_threshold": 0, 00:15:19.436 "tls_version": 0, 00:15:19.436 "enable_ktls": false 00:15:19.436 } 00:15:19.436 } 00:15:19.436 ] 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "subsystem": "vmd", 00:15:19.436 "config": [] 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "subsystem": "accel", 00:15:19.436 "config": [ 00:15:19.436 { 00:15:19.436 "method": "accel_set_options", 00:15:19.436 "params": { 00:15:19.436 "small_cache_size": 128, 00:15:19.436 "large_cache_size": 16, 00:15:19.436 "task_count": 2048, 00:15:19.436 "sequence_count": 2048, 00:15:19.436 "buf_count": 2048 00:15:19.436 } 00:15:19.436 } 00:15:19.436 ] 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "subsystem": "bdev", 00:15:19.436 "config": [ 00:15:19.436 { 00:15:19.436 "method": "bdev_set_options", 00:15:19.436 "params": { 00:15:19.436 "bdev_io_pool_size": 65535, 00:15:19.436 "bdev_io_cache_size": 256, 00:15:19.436 "bdev_auto_examine": true, 00:15:19.436 "iobuf_small_cache_size": 128, 00:15:19.436 "iobuf_large_cache_size": 16 00:15:19.436 } 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "method": "bdev_raid_set_options", 00:15:19.436 "params": { 00:15:19.436 "process_window_size_kb": 1024, 00:15:19.436 "process_max_bandwidth_mb_sec": 0 00:15:19.436 } 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "method": "bdev_iscsi_set_options", 00:15:19.436 "params": { 00:15:19.436 "timeout_sec": 30 00:15:19.436 } 00:15:19.436 }, 00:15:19.436 { 00:15:19.436 "method": "bdev_nvme_set_options", 00:15:19.436 "params": { 00:15:19.436 "action_on_timeout": "none", 00:15:19.436 "timeout_us": 0, 00:15:19.436 
"timeout_admin_us": 0, 00:15:19.436 "keep_alive_timeout_ms": 10000, 00:15:19.436 "arbitration_burst": 0, 00:15:19.436 "low_priority_weight": 0, 00:15:19.436 "medium_priority_weight": 0, 00:15:19.436 "high_priority_weight": 0, 00:15:19.436 "nvme_adminq_poll_period_us": 10000, 00:15:19.436 "nvme_ioq_poll_period_us": 0, 00:15:19.436 "io_queue_requests": 0, 00:15:19.436 "delay_cmd_submit": true, 00:15:19.436 "transport_retry_count": 4, 00:15:19.436 "bdev_retry_count": 3, 00:15:19.436 "transport_ack_timeout": 0, 00:15:19.436 "ctrlr_loss_timeout_sec": 0, 00:15:19.436 "reconnect_delay_sec": 0, 00:15:19.436 "fast_io_fail_timeout_sec": 0, 00:15:19.436 "disable_auto_failback": false, 00:15:19.436 "generate_uuids": false, 00:15:19.436 "transport_tos": 0, 00:15:19.436 "nvme_error_stat": false, 00:15:19.436 "rdma_srq_size": 0, 00:15:19.436 "io_path_stat": false, 00:15:19.436 "allow_accel_sequence": false, 00:15:19.436 "rdma_max_cq_size": 0, 00:15:19.437 "rdma_cm_event_timeout_ms": 0, 00:15:19.437 "dhchap_digests": [ 00:15:19.437 "sha256", 00:15:19.437 "sha384", 00:15:19.437 "sha512" 00:15:19.437 ], 00:15:19.437 "dhchap_dhgroups": [ 00:15:19.437 "null", 00:15:19.437 "ffdhe2048", 00:15:19.437 "ffdhe3072", 00:15:19.437 "ffdhe4096", 00:15:19.437 "ffdhe6144", 00:15:19.437 "ffdhe8192" 00:15:19.437 ] 00:15:19.437 } 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "method": "bdev_nvme_set_hotplug", 00:15:19.437 "params": { 00:15:19.437 "period_us": 100000, 00:15:19.437 "enable": false 00:15:19.437 } 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "method": "bdev_malloc_create", 00:15:19.437 "params": { 00:15:19.437 "name": "malloc0", 00:15:19.437 "num_blocks": 8192, 00:15:19.437 "block_size": 4096, 00:15:19.437 "physical_block_size": 4096, 00:15:19.437 "uuid": "38221dc7-05fd-4346-8e69-1f36c5445de0", 00:15:19.437 "optimal_io_boundary": 0, 00:15:19.437 "md_size": 0, 00:15:19.437 "dif_type": 0, 00:15:19.437 "dif_is_head_of_md": false, 00:15:19.437 "dif_pi_format": 0 00:15:19.437 } 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "method": "bdev_wait_for_examine" 00:15:19.437 } 00:15:19.437 ] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "scsi", 00:15:19.437 "config": null 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "scheduler", 00:15:19.437 "config": [ 00:15:19.437 { 00:15:19.437 "method": "framework_set_scheduler", 00:15:19.437 "params": { 00:15:19.437 "name": "static" 00:15:19.437 } 00:15:19.437 } 00:15:19.437 ] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "vhost_scsi", 00:15:19.437 "config": [] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "vhost_blk", 00:15:19.437 "config": [] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "ublk", 00:15:19.437 "config": [ 00:15:19.437 { 00:15:19.437 "method": "ublk_create_target", 00:15:19.437 "params": { 00:15:19.437 "cpumask": "1" 00:15:19.437 } 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "method": "ublk_start_disk", 00:15:19.437 "params": { 00:15:19.437 "bdev_name": "malloc0", 00:15:19.437 "ublk_id": 0, 00:15:19.437 "num_queues": 1, 00:15:19.437 "queue_depth": 128 00:15:19.437 } 00:15:19.437 } 00:15:19.437 ] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "nbd", 00:15:19.437 "config": [] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "nvmf", 00:15:19.437 "config": [ 00:15:19.437 { 00:15:19.437 "method": "nvmf_set_config", 00:15:19.437 "params": { 00:15:19.437 "discovery_filter": "match_any", 00:15:19.437 "admin_cmd_passthru": { 00:15:19.437 "identify_ctrlr": false 00:15:19.437 }, 00:15:19.437 "dhchap_digests": [ 
00:15:19.437 "sha256", 00:15:19.437 "sha384", 00:15:19.437 "sha512" 00:15:19.437 ], 00:15:19.437 "dhchap_dhgroups": [ 00:15:19.437 "null", 00:15:19.437 "ffdhe2048", 00:15:19.437 "ffdhe3072", 00:15:19.437 "ffdhe4096", 00:15:19.437 "ffdhe6144", 00:15:19.437 "ffdhe8192" 00:15:19.437 ] 00:15:19.437 } 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "method": "nvmf_set_max_subsystems", 00:15:19.437 "params": { 00:15:19.437 "max_subsystems": 1024 00:15:19.437 } 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "method": "nvmf_set_crdt", 00:15:19.437 "params": { 00:15:19.437 "crdt1": 0, 00:15:19.437 "crdt2": 0, 00:15:19.437 "crdt3": 0 00:15:19.437 } 00:15:19.437 } 00:15:19.437 ] 00:15:19.437 }, 00:15:19.437 { 00:15:19.437 "subsystem": "iscsi", 00:15:19.437 "config": [ 00:15:19.437 { 00:15:19.437 "method": "iscsi_set_options", 00:15:19.437 "params": { 00:15:19.437 "node_base": "iqn.2016-06.io.spdk", 00:15:19.437 "max_sessions": 128, 00:15:19.437 "max_connections_per_session": 2, 00:15:19.437 "max_queue_depth": 64, 00:15:19.437 "default_time2wait": 2, 00:15:19.437 "default_time2retain": 20, 00:15:19.437 "first_burst_length": 8192, 00:15:19.437 "immediate_data": true, 00:15:19.437 "allow_duplicated_isid": false, 00:15:19.437 "error_recovery_level": 0, 00:15:19.437 "nop_timeout": 60, 00:15:19.437 "nop_in_interval": 30, 00:15:19.437 "disable_chap": false, 00:15:19.437 "require_chap": false, 00:15:19.437 "mutual_chap": false, 00:15:19.437 "chap_group": 0, 00:15:19.437 "max_large_datain_per_connection": 64, 00:15:19.437 "max_r2t_per_connection": 4, 00:15:19.437 "pdu_pool_size": 36864, 00:15:19.437 "immediate_data_pool_size": 16384, 00:15:19.437 "data_out_pool_size": 2048 00:15:19.437 } 00:15:19.437 } 00:15:19.437 ] 00:15:19.437 } 00:15:19.437 ] 00:15:19.437 }' 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70908 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70908 ']' 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70908 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70908 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.437 killing process with pid 70908 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70908' 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70908 00:15:19.437 10:14:22 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70908 00:15:20.372 [2024-10-17 10:14:23.390015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.372 [2024-10-17 10:14:23.430102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:20.372 [2024-10-17 10:14:23.430237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:20.372 [2024-10-17 10:14:23.438082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:20.372 [2024-10-17 10:14:23.438269] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:15:20.372 [2024-10-17 10:14:23.438280] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:20.372 [2024-10-17 10:14:23.438306] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:20.372 [2024-10-17 10:14:23.438442] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70967 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70967 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70967 ']' 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:22.275 10:14:25 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:22.275 "subsystems": [ 00:15:22.275 { 00:15:22.275 "subsystem": "fsdev", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "fsdev_set_opts", 00:15:22.275 "params": { 00:15:22.275 "fsdev_io_pool_size": 65535, 00:15:22.275 "fsdev_io_cache_size": 256 00:15:22.275 } 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "keyring", 00:15:22.275 "config": [] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "iobuf", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "iobuf_set_options", 00:15:22.275 "params": { 00:15:22.275 "small_pool_count": 8192, 00:15:22.275 "large_pool_count": 1024, 00:15:22.275 "small_bufsize": 8192, 00:15:22.275 "large_bufsize": 135168 00:15:22.275 } 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "sock", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "sock_set_default_impl", 00:15:22.275 "params": { 00:15:22.275 "impl_name": "posix" 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "sock_impl_set_options", 00:15:22.275 "params": { 00:15:22.275 "impl_name": "ssl", 00:15:22.275 "recv_buf_size": 4096, 00:15:22.275 "send_buf_size": 4096, 00:15:22.275 "enable_recv_pipe": true, 00:15:22.275 "enable_quickack": false, 00:15:22.275 "enable_placement_id": 0, 00:15:22.275 "enable_zerocopy_send_server": true, 00:15:22.275 "enable_zerocopy_send_client": false, 00:15:22.275 "zerocopy_threshold": 0, 00:15:22.275 "tls_version": 0, 00:15:22.275 "enable_ktls": false 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "sock_impl_set_options", 00:15:22.275 "params": { 00:15:22.275 "impl_name": "posix", 00:15:22.275 "recv_buf_size": 2097152, 00:15:22.275 "send_buf_size": 2097152, 00:15:22.275 "enable_recv_pipe": true, 00:15:22.275 "enable_quickack": false, 00:15:22.275 "enable_placement_id": 0, 00:15:22.275 "enable_zerocopy_send_server": true, 00:15:22.275 "enable_zerocopy_send_client": false, 00:15:22.275 "zerocopy_threshold": 0, 00:15:22.275 
"tls_version": 0, 00:15:22.275 "enable_ktls": false 00:15:22.275 } 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "vmd", 00:15:22.275 "config": [] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "accel", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "accel_set_options", 00:15:22.275 "params": { 00:15:22.275 "small_cache_size": 128, 00:15:22.275 "large_cache_size": 16, 00:15:22.275 "task_count": 2048, 00:15:22.275 "sequence_count": 2048, 00:15:22.275 "buf_count": 2048 00:15:22.275 } 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "bdev", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "bdev_set_options", 00:15:22.275 "params": { 00:15:22.275 "bdev_io_pool_size": 65535, 00:15:22.275 "bdev_io_cache_size": 256, 00:15:22.275 "bdev_auto_examine": true, 00:15:22.275 "iobuf_small_cache_size": 128, 00:15:22.275 "iobuf_large_cache_size": 16 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "bdev_raid_set_options", 00:15:22.275 "params": { 00:15:22.275 "process_window_size_kb": 1024, 00:15:22.275 "process_max_bandwidth_mb_sec": 0 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "bdev_iscsi_set_options", 00:15:22.275 "params": { 00:15:22.275 "timeout_sec": 30 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "bdev_nvme_set_options", 00:15:22.275 "params": { 00:15:22.275 "action_on_timeout": "none", 00:15:22.275 "timeout_us": 0, 00:15:22.275 "timeout_admin_us": 0, 00:15:22.275 "keep_alive_timeout_ms": 10000, 00:15:22.275 "arbitration_burst": 0, 00:15:22.275 "low_priority_weight": 0, 00:15:22.275 "medium_priority_weight": 0, 00:15:22.275 "high_priority_weight": 0, 00:15:22.275 "nvme_adminq_poll_period_us": 10000, 00:15:22.275 "nvme_ioq_poll_period_us": 0, 00:15:22.275 "io_queue_requests": 0, 00:15:22.275 "delay_cmd_submit": true, 00:15:22.275 "transport_retry_count": 4, 00:15:22.275 "bdev_retry_count": 3, 00:15:22.275 "transport_ack_timeout": 0, 00:15:22.275 "ctrlr_loss_timeout_sec": 0, 00:15:22.275 "reconnect_delay_sec": 0, 00:15:22.275 "fast_io_fail_timeout_sec": 0, 00:15:22.275 "disable_auto_failback": false, 00:15:22.275 "generate_uuids": false, 00:15:22.275 "transport_tos": 0, 00:15:22.275 "nvme_error_stat": false, 00:15:22.275 "rdma_srq_size": 0, 00:15:22.275 "io_path_stat": false, 00:15:22.275 "allow_accel_sequence": false, 00:15:22.275 "rdma_max_cq_size": 0, 00:15:22.275 "rdma_cm_event_timeout_ms": 0, 00:15:22.275 "dhchap_digests": [ 00:15:22.275 "sha256", 00:15:22.275 "sha384", 00:15:22.275 "sha512" 00:15:22.275 ], 00:15:22.275 "dhchap_dhgroups": [ 00:15:22.275 "null", 00:15:22.275 "ffdhe2048", 00:15:22.275 "ffdhe3072", 00:15:22.275 "ffdhe4096", 00:15:22.275 "ffdhe6144", 00:15:22.275 "ffdhe8192" 00:15:22.275 ] 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "bdev_nvme_set_hotplug", 00:15:22.275 "params": { 00:15:22.275 "period_us": 100000, 00:15:22.275 "enable": false 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "bdev_malloc_create", 00:15:22.275 "params": { 00:15:22.275 "name": "malloc0", 00:15:22.275 "num_blocks": 8192, 00:15:22.275 "block_size": 4096, 00:15:22.275 "physical_block_size": 4096, 00:15:22.275 "uuid": "38221dc7-05fd-4346-8e69-1f36c5445de0", 00:15:22.275 "optimal_io_boundary": 0, 00:15:22.275 "md_size": 0, 00:15:22.275 "dif_type": 0, 00:15:22.275 "dif_is_head_of_md": false, 00:15:22.275 "dif_pi_format": 0 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 
00:15:22.275 "method": "bdev_wait_for_examine" 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "scsi", 00:15:22.275 "config": null 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "scheduler", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "framework_set_scheduler", 00:15:22.275 "params": { 00:15:22.275 "name": "static" 00:15:22.275 } 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "vhost_scsi", 00:15:22.275 "config": [] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "vhost_blk", 00:15:22.275 "config": [] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "ublk", 00:15:22.275 "config": [ 00:15:22.275 { 00:15:22.275 "method": "ublk_create_target", 00:15:22.275 "params": { 00:15:22.275 "cpumask": "1" 00:15:22.275 } 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "method": "ublk_start_disk", 00:15:22.275 "params": { 00:15:22.275 "bdev_name": "malloc0", 00:15:22.275 "ublk_id": 0, 00:15:22.275 "num_queues": 1, 00:15:22.275 "queue_depth": 128 00:15:22.275 } 00:15:22.275 } 00:15:22.275 ] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "nbd", 00:15:22.275 "config": [] 00:15:22.275 }, 00:15:22.275 { 00:15:22.275 "subsystem": "nvmf", 00:15:22.276 "config": [ 00:15:22.276 { 00:15:22.276 "method": "nvmf_set_config", 00:15:22.276 "params": { 00:15:22.276 "discovery_filter": "match_any", 00:15:22.276 "admin_cmd_passthru": { 00:15:22.276 "identify_ctrlr": false 00:15:22.276 }, 00:15:22.276 "dhchap_digests": [ 00:15:22.276 "sha256", 00:15:22.276 "sha384", 00:15:22.276 "sha512" 00:15:22.276 ], 00:15:22.276 "dhchap_dhgroups": [ 00:15:22.276 "null", 00:15:22.276 "ffdhe2048", 00:15:22.276 "ffdhe3072", 00:15:22.276 "ffdhe4096", 00:15:22.276 "ffdhe6144", 00:15:22.276 "ffdhe8192" 00:15:22.276 ] 00:15:22.276 } 00:15:22.276 }, 00:15:22.276 { 00:15:22.276 "method": "nvmf_set_max_subsystems", 00:15:22.276 "params": { 00:15:22.276 "max_subsystems": 1024 00:15:22.276 } 00:15:22.276 }, 00:15:22.276 { 00:15:22.276 "method": "nvmf_set_crdt", 00:15:22.276 "params": { 00:15:22.276 "crdt1": 0, 00:15:22.276 "crdt2": 0, 00:15:22.276 "crdt3": 0 00:15:22.276 } 00:15:22.276 } 00:15:22.276 ] 00:15:22.276 }, 00:15:22.276 { 00:15:22.276 "subsystem": "iscsi", 00:15:22.276 "config": [ 00:15:22.276 { 00:15:22.276 "method": "iscsi_set_options", 00:15:22.276 "params": { 00:15:22.276 "node_base": "iqn.2016-06.io.spdk", 00:15:22.276 "max_sessions": 128, 00:15:22.276 "max_connections_per_session": 2, 00:15:22.276 "max_queue_depth": 64, 00:15:22.276 "default_time2wait": 2, 00:15:22.276 "default_time2retain": 20, 00:15:22.276 "first_burst_length": 8192, 00:15:22.276 "immediate_data": true, 00:15:22.276 "allow_duplicated_isid": false, 00:15:22.276 "error_recovery_level": 0, 00:15:22.276 "nop_timeout": 60, 00:15:22.276 "nop_in_interval": 30, 00:15:22.276 "disable_chap": false, 00:15:22.276 "require_chap": false, 00:15:22.276 "mutual_chap": false, 00:15:22.276 "chap_group": 0, 00:15:22.276 "max_large_datain_per_connection": 64, 00:15:22.276 "max_r2t_per_connection": 4, 00:15:22.276 "pdu_pool_size": 36864, 00:15:22.276 "immediate_data_pool_size": 16384, 00:15:22.276 "data_out_pool_size": 2048 00:15:22.276 } 00:15:22.276 } 00:15:22.276 ] 00:15:22.276 } 00:15:22.276 ] 00:15:22.276 }' 00:15:22.276 [2024-10-17 10:14:25.196002] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:15:22.276 [2024-10-17 10:14:25.196126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70967 ] 00:15:22.276 [2024-10-17 10:14:25.340033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.536 [2024-10-17 10:14:25.443673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.479 [2024-10-17 10:14:26.226082] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:23.479 [2024-10-17 10:14:26.226934] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:23.479 [2024-10-17 10:14:26.234210] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:23.479 [2024-10-17 10:14:26.234295] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:23.479 [2024-10-17 10:14:26.234305] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:23.479 [2024-10-17 10:14:26.234312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:23.479 [2024-10-17 10:14:26.243182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:23.479 [2024-10-17 10:14:26.243210] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:23.479 [2024-10-17 10:14:26.250091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:23.479 [2024-10-17 10:14:26.250203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:23.479 [2024-10-17 10:14:26.267092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:23.479 10:14:26 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70967 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70967 ']' 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70967 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70967 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70967' 00:15:23.480 killing process with pid 70967 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70967 00:15:23.480 10:14:26 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70967 00:15:24.865 [2024-10-17 10:14:27.555509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:24.865 [2024-10-17 10:14:27.593090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:24.865 [2024-10-17 10:14:27.593229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:24.865 [2024-10-17 10:14:27.601073] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:24.865 [2024-10-17 10:14:27.601116] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:24.865 [2024-10-17 10:14:27.601124] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:24.865 [2024-10-17 10:14:27.601146] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:24.865 [2024-10-17 10:14:27.601282] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:26.244 10:14:29 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:26.244 00:15:26.244 real 0m8.178s 00:15:26.244 user 0m5.392s 00:15:26.244 sys 0m3.372s 00:15:26.244 10:14:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.244 10:14:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:26.244 ************************************ 00:15:26.244 END TEST test_save_ublk_config 00:15:26.244 ************************************ 00:15:26.244 10:14:29 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71042 00:15:26.244 10:14:29 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:26.244 10:14:29 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:26.244 10:14:29 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71042 00:15:26.244 10:14:29 ublk -- common/autotest_common.sh@831 -- # '[' -z 71042 ']' 00:15:26.244 10:14:29 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.244 10:14:29 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.244 10:14:29 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.244 10:14:29 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.244 10:14:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:26.244 [2024-10-17 10:14:29.292594] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:15:26.244 [2024-10-17 10:14:29.292680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71042 ] 00:15:26.502 [2024-10-17 10:14:29.430333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:26.502 [2024-10-17 10:14:29.513992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.502 [2024-10-17 10:14:29.514011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.134 10:14:30 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.134 10:14:30 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:27.134 10:14:30 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:27.134 10:14:30 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:27.134 10:14:30 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.134 10:14:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.134 ************************************ 00:15:27.134 START TEST test_create_ublk 00:15:27.134 ************************************ 00:15:27.134 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:27.134 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:27.134 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.134 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.134 [2024-10-17 10:14:30.152070] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:27.134 [2024-10-17 10:14:30.153675] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:27.134 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.134 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:27.134 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:27.134 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.134 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.393 [2024-10-17 10:14:30.312174] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:27.393 [2024-10-17 10:14:30.312487] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:27.393 [2024-10-17 10:14:30.312502] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:27.393 [2024-10-17 10:14:30.312508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:27.393 [2024-10-17 10:14:30.321235] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:27.393 [2024-10-17 10:14:30.321253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:27.393 
[2024-10-17 10:14:30.328076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:27.393 [2024-10-17 10:14:30.336109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:27.393 [2024-10-17 10:14:30.346177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.393 10:14:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:27.393 { 00:15:27.393 "ublk_device": "/dev/ublkb0", 00:15:27.393 "id": 0, 00:15:27.393 "queue_depth": 512, 00:15:27.393 "num_queues": 4, 00:15:27.393 "bdev_name": "Malloc0" 00:15:27.393 } 00:15:27.393 ]' 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:27.393 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:27.652 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:27.652 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:27.652 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:27.652 10:14:30 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
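The fio_template assembled above is a single timed pattern write; expanded, it is the command the next trace line executes. Run standalone against the same device with the values used here (134217728 bytes = 128 MiB, pattern 0xcc), it looks like this — and because --time_based --runtime=10 lets the write phase consume the whole budget, fio's verification read phase never runs, exactly as the warning below notes:

# Equivalent standalone invocation of the fio job built above.
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0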
00:15:27.652 10:14:30 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:27.652 fio: verification read phase will never start because write phase uses all of runtime 00:15:27.652 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:27.652 fio-3.35 00:15:27.652 Starting 1 process 00:15:39.895 00:15:39.895 fio_test: (groupid=0, jobs=1): err= 0: pid=71086: Thu Oct 17 10:14:40 2024 00:15:39.895 write: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(777MiB/10001msec); 0 zone resets 00:15:39.895 clat (usec): min=35, max=4159, avg=49.47, stdev=80.53 00:15:39.895 lat (usec): min=35, max=4175, avg=49.91, stdev=80.54 00:15:39.895 clat percentiles (usec): 00:15:39.895 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:15:39.895 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:15:39.895 | 70.00th=[ 49], 80.00th=[ 50], 90.00th=[ 54], 95.00th=[ 58], 00:15:39.895 | 99.00th=[ 67], 99.50th=[ 72], 99.90th=[ 1401], 99.95th=[ 2343], 00:15:39.895 | 99.99th=[ 3294] 00:15:39.895 bw ( KiB/s): min=69312, max=83720, per=100.00%, avg=79704.00, stdev=3560.64, samples=19 00:15:39.895 iops : min=17328, max=20930, avg=19926.00, stdev=890.16, samples=19 00:15:39.895 lat (usec) : 50=80.15%, 100=19.62%, 250=0.05%, 500=0.04%, 750=0.01% 00:15:39.895 lat (usec) : 1000=0.01% 00:15:39.895 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:15:39.895 cpu : usr=3.06%, sys=13.86%, ctx=199038, majf=0, minf=797 00:15:39.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.895 issued rwts: total=0,199039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.895 00:15:39.895 Run status group 0 (all jobs): 00:15:39.895 WRITE: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=777MiB (815MB), run=10001-10001msec 00:15:39.895 00:15:39.895 Disk stats (read/write): 00:15:39.895 ublkb0: ios=0/196997, merge=0/0, ticks=0/8331, in_queue=8331, util=99.09% 00:15:39.895 10:14:40 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.895 [2024-10-17 10:14:40.775000] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:39.895 [2024-10-17 10:14:40.814556] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:39.895 [2024-10-17 10:14:40.815447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:39.895 [2024-10-17 10:14:40.828070] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:39.895 [2024-10-17 10:14:40.828318] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:39.895 [2024-10-17 10:14:40.828329] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.895 10:14:40 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.895 [2024-10-17 10:14:40.834155] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:39.895 request: 00:15:39.895 { 00:15:39.895 "ublk_id": 0, 00:15:39.895 "method": "ublk_stop_disk", 00:15:39.895 "req_id": 1 00:15:39.895 } 00:15:39.895 Got JSON-RPC error response 00:15:39.895 response: 00:15:39.895 { 00:15:39.895 "code": -19, 00:15:39.895 "message": "No such device" 00:15:39.895 } 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.895 10:14:40 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.895 [2024-10-17 10:14:40.852128] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:39.895 [2024-10-17 10:14:40.855851] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:39.895 [2024-10-17 10:14:40.855886] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.895 10:14:40 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.895 10:14:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.895 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.895 10:14:41 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:39.895 10:14:41 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:39.896 10:14:41 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:39.896 10:14:41 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:39.896 10:14:41 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:39.896 10:14:41 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:39.896 ************************************ 00:15:39.896 END TEST test_create_ublk 00:15:39.896 ************************************ 00:15:39.896 10:14:41 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:39.896 00:15:39.896 real 0m11.176s 00:15:39.896 user 0m0.613s 00:15:39.896 sys 0m1.466s 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:41 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:39.896 10:14:41 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:39.896 10:14:41 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.896 10:14:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 ************************************ 00:15:39.896 START TEST test_create_multi_ublk 00:15:39.896 ************************************ 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 [2024-10-17 10:14:41.368061] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:39.896 [2024-10-17 10:14:41.369657] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 [2024-10-17 10:14:41.596174] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:15:39.896 [2024-10-17 10:14:41.596493] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:39.896 [2024-10-17 10:14:41.596506] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:39.896 [2024-10-17 10:14:41.596514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.896 [2024-10-17 10:14:41.620076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.896 [2024-10-17 10:14:41.620098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.896 [2024-10-17 10:14:41.632066] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.896 [2024-10-17 10:14:41.632583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:39.896 [2024-10-17 10:14:41.672067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 [2024-10-17 10:14:41.888184] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:39.896 [2024-10-17 10:14:41.888507] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:39.896 [2024-10-17 10:14:41.888521] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:39.896 [2024-10-17 10:14:41.888527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.896 [2024-10-17 10:14:41.896077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.896 [2024-10-17 10:14:41.896096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.896 [2024-10-17 10:14:41.904071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.896 [2024-10-17 10:14:41.904586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:39.896 [2024-10-17 10:14:41.913106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.896 
10:14:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 [2024-10-17 10:14:42.080173] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:39.896 [2024-10-17 10:14:42.080480] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:39.896 [2024-10-17 10:14:42.080492] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:39.896 [2024-10-17 10:14:42.080499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.896 [2024-10-17 10:14:42.088077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.896 [2024-10-17 10:14:42.088099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.896 [2024-10-17 10:14:42.096081] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.896 [2024-10-17 10:14:42.096601] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:39.896 [2024-10-17 10:14:42.105090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 [2024-10-17 10:14:42.272170] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:39.896 [2024-10-17 10:14:42.272472] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:39.896 [2024-10-17 10:14:42.272485] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:39.896 [2024-10-17 10:14:42.272490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.896 
[2024-10-17 10:14:42.280092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.896 [2024-10-17 10:14:42.280112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.896 [2024-10-17 10:14:42.288082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.896 [2024-10-17 10:14:42.288607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:39.896 [2024-10-17 10:14:42.297120] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.896 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:39.896 { 00:15:39.896 "ublk_device": "/dev/ublkb0", 00:15:39.896 "id": 0, 00:15:39.896 "queue_depth": 512, 00:15:39.896 "num_queues": 4, 00:15:39.896 "bdev_name": "Malloc0" 00:15:39.896 }, 00:15:39.896 { 00:15:39.896 "ublk_device": "/dev/ublkb1", 00:15:39.896 "id": 1, 00:15:39.896 "queue_depth": 512, 00:15:39.896 "num_queues": 4, 00:15:39.897 "bdev_name": "Malloc1" 00:15:39.897 }, 00:15:39.897 { 00:15:39.897 "ublk_device": "/dev/ublkb2", 00:15:39.897 "id": 2, 00:15:39.897 "queue_depth": 512, 00:15:39.897 "num_queues": 4, 00:15:39.897 "bdev_name": "Malloc2" 00:15:39.897 }, 00:15:39.897 { 00:15:39.897 "ublk_device": "/dev/ublkb3", 00:15:39.897 "id": 3, 00:15:39.897 "queue_depth": 512, 00:15:39.897 "num_queues": 4, 00:15:39.897 "bdev_name": "Malloc3" 00:15:39.897 } 00:15:39.897 ]' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.897 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.158 10:14:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.158 [2024-10-17 10:14:43.000167] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.158 [2024-10-17 10:14:43.048107] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.158 [2024-10-17 10:14:43.048810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.158 [2024-10-17 10:14:43.057099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.158 [2024-10-17 10:14:43.057347] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:40.158 [2024-10-17 10:14:43.057357] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.158 [2024-10-17 10:14:43.071147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.158 [2024-10-17 10:14:43.104453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.158 [2024-10-17 10:14:43.105482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.158 [2024-10-17 10:14:43.112080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.158 [2024-10-17 10:14:43.112320] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:40.158 [2024-10-17 10:14:43.112333] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:40.158 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.159 [2024-10-17 10:14:43.128145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.159 [2024-10-17 10:14:43.160102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.159 [2024-10-17 10:14:43.160752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.159 [2024-10-17 10:14:43.168079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.159 [2024-10-17 10:14:43.168306] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:40.159 [2024-10-17 10:14:43.168319] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.159 [2024-10-17 10:14:43.184133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.159 [2024-10-17 10:14:43.224099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.159 [2024-10-17 10:14:43.224694] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.159 [2024-10-17 10:14:43.233112] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.159 [2024-10-17 10:14:43.233354] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:40.159 [2024-10-17 10:14:43.233365] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.159 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:40.429 [2024-10-17 10:14:43.424135] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:40.429 [2024-10-17 10:14:43.427819] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:40.429 [2024-10-17 10:14:43.427851] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:40.429 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:40.429 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.429 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:40.429 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.429 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.999 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.999 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.999 10:14:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:40.999 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.999 10:14:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.257 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.257 10:14:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.257 10:14:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:41.257 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.257 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.515 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.515 10:14:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.515 10:14:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:41.516 10:14:44 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:41.516 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:41.777 ************************************ 00:15:41.777 END TEST test_create_multi_ublk 00:15:41.777 ************************************ 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:41.777 00:15:41.777 real 0m3.310s 00:15:41.777 user 0m0.849s 00:15:41.777 sys 0m0.152s 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.777 10:14:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.777 10:14:44 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:41.777 10:14:44 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:41.777 10:14:44 ublk -- ublk/ublk.sh@130 -- # killprocess 71042 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@950 -- # '[' -z 71042 ']' 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@954 -- # kill -0 71042 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@955 -- # uname 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71042 00:15:41.777 killing process with pid 71042 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71042' 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@969 -- # kill 71042 00:15:41.777 10:14:44 ublk -- common/autotest_common.sh@974 -- # wait 71042 00:15:42.349 [2024-10-17 10:14:45.269019] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:42.349 [2024-10-17 10:14:45.269067] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:42.921 00:15:42.921 real 0m25.062s 00:15:42.921 user 0m35.162s 00:15:42.921 sys 0m10.004s 00:15:42.921 10:14:45 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.921 10:14:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.921 ************************************ 00:15:42.921 END TEST ublk 00:15:42.921 ************************************ 00:15:42.921 10:14:45 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:42.921 
10:14:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:42.921 10:14:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.921 10:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:42.921 ************************************ 00:15:42.921 START TEST ublk_recovery 00:15:42.921 ************************************ 00:15:42.921 10:14:45 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:43.182 * Looking for test storage... 00:15:43.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:43.182 10:14:46 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:43.182 10:14:46 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:43.182 10:14:46 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:43.182 10:14:46 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:43.182 10:14:46 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.183 10:14:46 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.183 10:14:46 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.183 10:14:46 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.183 --rc genhtml_branch_coverage=1 00:15:43.183 --rc genhtml_function_coverage=1 00:15:43.183 --rc genhtml_legend=1 00:15:43.183 --rc geninfo_all_blocks=1 00:15:43.183 --rc geninfo_unexecuted_blocks=1 00:15:43.183 00:15:43.183 ' 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.183 --rc genhtml_branch_coverage=1 00:15:43.183 --rc genhtml_function_coverage=1 00:15:43.183 --rc genhtml_legend=1 00:15:43.183 --rc geninfo_all_blocks=1 00:15:43.183 --rc geninfo_unexecuted_blocks=1 00:15:43.183 00:15:43.183 ' 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.183 --rc genhtml_branch_coverage=1 00:15:43.183 --rc genhtml_function_coverage=1 00:15:43.183 --rc genhtml_legend=1 00:15:43.183 --rc geninfo_all_blocks=1 00:15:43.183 --rc geninfo_unexecuted_blocks=1 00:15:43.183 00:15:43.183 ' 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.183 --rc genhtml_branch_coverage=1 00:15:43.183 --rc genhtml_function_coverage=1 00:15:43.183 --rc genhtml_legend=1 00:15:43.183 --rc geninfo_all_blocks=1 00:15:43.183 --rc geninfo_unexecuted_blocks=1 00:15:43.183 00:15:43.183 ' 00:15:43.183 10:14:46 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:43.183 10:14:46 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:43.183 10:14:46 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:43.183 10:14:46 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71434 00:15:43.183 10:14:46 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:43.183 10:14:46 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71434 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71434 ']' 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.183 10:14:46 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.183 10:14:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.183 [2024-10-17 10:14:46.209983] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:15:43.183 [2024-10-17 10:14:46.210263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71434 ] 00:15:43.444 [2024-10-17 10:14:46.358679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:43.444 [2024-10-17 10:14:46.439294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.444 [2024-10-17 10:14:46.439387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:44.015 10:14:47 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.015 [2024-10-17 10:14:47.070071] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:44.015 [2024-10-17 10:14:47.071722] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.015 10:14:47 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.015 10:14:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.274 malloc0 00:15:44.274 10:14:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.274 10:14:47 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:44.274 10:14:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.274 10:14:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.274 [2024-10-17 10:14:47.158190] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:44.274 [2024-10-17 10:14:47.158273] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:44.274 [2024-10-17 10:14:47.158283] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:44.274 [2024-10-17 10:14:47.158289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:44.274 [2024-10-17 10:14:47.167149] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:44.274 [2024-10-17 10:14:47.167169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:44.274 [2024-10-17 10:14:47.174077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:44.274 [2024-10-17 10:14:47.174195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:44.274 [2024-10-17 10:14:47.196079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:44.274 1 00:15:44.274 10:14:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.274 10:14:47 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:45.214 10:14:48 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71469 00:15:45.214 10:14:48 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:45.214 10:14:48 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:45.473 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.473 fio-3.35 00:15:45.473 Starting 1 process 00:15:50.762 10:14:53 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71434 00:15:50.762 10:14:53 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:56.073 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71434 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:56.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.073 10:14:58 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71580 00:15:56.073 10:14:58 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.073 10:14:58 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.073 10:14:58 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71580 00:15:56.073 10:14:58 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71580 ']' 00:15:56.073 10:14:58 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.073 10:14:58 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.073 10:14:58 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.073 10:14:58 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.073 10:14:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 [2024-10-17 10:14:58.292994] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:15:56.073 [2024-10-17 10:14:58.293142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71580 ] 00:15:56.073 [2024-10-17 10:14:58.441973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.073 [2024-10-17 10:14:58.543590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.073 [2024-10-17 10:14:58.543596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:56.073 10:14:59 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 [2024-10-17 10:14:59.154083] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:56.073 [2024-10-17 10:14:59.155979] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.073 10:14:59 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.073 10:14:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.332 malloc0 00:15:56.332 10:14:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.332 10:14:59 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:56.332 10:14:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.332 10:14:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.332 [2024-10-17 10:14:59.257215] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:56.332 [2024-10-17 10:14:59.257260] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:56.332 [2024-10-17 10:14:59.257270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:56.332 [2024-10-17 10:14:59.265099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:56.332 [2024-10-17 10:14:59.265124] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:56.332 1 00:15:56.332 10:14:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.332 10:14:59 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71469 00:15:57.295 [2024-10-17 10:15:00.266079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:57.295 [2024-10-17 10:15:00.274085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:57.295 [2024-10-17 10:15:00.274106] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:58.233 [2024-10-17 10:15:01.274139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:58.233 [2024-10-17 10:15:01.282071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:58.233 [2024-10-17 10:15:01.282093] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:15:59.606 [2024-10-17 10:15:02.282120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:59.606 [2024-10-17 10:15:02.290077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:59.606 [2024-10-17 10:15:02.290106] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:59.606 [2024-10-17 10:15:02.290115] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:59.606 [2024-10-17 10:15:02.290192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:21.548 [2024-10-17 10:15:23.546080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:21.548 [2024-10-17 10:15:23.552030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:21.548 [2024-10-17 10:15:23.559250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:21.548 [2024-10-17 10:15:23.559269] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:48.078 00:16:48.078 fio_test: (groupid=0, jobs=1): err= 0: pid=71473: Thu Oct 17 10:15:48 2024 00:16:48.078 read: IOPS=15.2k, BW=59.6MiB/s (62.5MB/s)(3574MiB/60002msec) 00:16:48.078 slat (nsec): min=1033, max=501684, avg=4822.26, stdev=1552.65 00:16:48.078 clat (usec): min=640, max=30357k, avg=4389.15, stdev=267393.57 00:16:48.078 lat (usec): min=645, max=30357k, avg=4393.98, stdev=267393.57 00:16:48.078 clat percentiles (usec): 00:16:48.078 | 1.00th=[ 1680], 5.00th=[ 1778], 10.00th=[ 1795], 20.00th=[ 1827], 00:16:48.078 | 30.00th=[ 1860], 40.00th=[ 1876], 50.00th=[ 1909], 60.00th=[ 1942], 00:16:48.078 | 70.00th=[ 1975], 80.00th=[ 2008], 90.00th=[ 2073], 95.00th=[ 2933], 00:16:48.078 | 99.00th=[ 4948], 99.50th=[ 5342], 99.90th=[ 7111], 99.95th=[ 8291], 00:16:48.078 | 99.99th=[13173] 00:16:48.078 bw ( KiB/s): min=46414, max=131912, per=100.00%, avg=122233.64, stdev=14894.42, samples=59 00:16:48.078 iops : min=11603, max=32978, avg=30558.42, stdev=3723.64, samples=59 00:16:48.078 write: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(3568MiB/60002msec); 0 zone resets 00:16:48.078 slat (nsec): min=1057, max=120300, avg=4848.85, stdev=1425.72 00:16:48.078 clat (usec): min=583, max=30358k, avg=4001.47, stdev=239785.28 00:16:48.078 lat (usec): min=587, max=30358k, avg=4006.32, stdev=239785.28 00:16:48.078 clat percentiles (usec): 00:16:48.078 | 1.00th=[ 1713], 5.00th=[ 1860], 10.00th=[ 1876], 20.00th=[ 1909], 00:16:48.078 | 30.00th=[ 1942], 40.00th=[ 1958], 50.00th=[ 1991], 60.00th=[ 2024], 00:16:48.078 | 70.00th=[ 2057], 80.00th=[ 2089], 90.00th=[ 2147], 95.00th=[ 2835], 00:16:48.078 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 7177], 99.95th=[ 8717], 00:16:48.078 | 99.99th=[13173] 00:16:48.078 bw ( KiB/s): min=46358, max=131200, per=100.00%, avg=122062.32, stdev=15042.51, samples=59 00:16:48.078 iops : min=11589, max=32800, avg=30515.56, stdev=3760.64, samples=59 00:16:48.078 lat (usec) : 750=0.01%, 1000=0.01% 00:16:48.078 lat (msec) : 2=65.00%, 4=32.34%, 10=2.62%, 20=0.03%, >=2000=0.01% 00:16:48.078 cpu : usr=3.29%, sys=15.03%, ctx=61493, majf=0, minf=13 00:16:48.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:48.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:16:48.078 issued rwts: total=914879,913380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.078 00:16:48.078 Run status group 0 (all jobs): 00:16:48.078 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=3574MiB (3747MB), run=60002-60002msec 00:16:48.078 WRITE: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=3568MiB (3741MB), run=60002-60002msec 00:16:48.078 00:16:48.078 Disk stats (read/write): 00:16:48.078 ublkb1: ios=911987/910445, merge=0/0, ticks=3964716/3531158, in_queue=7495874, util=99.93% 00:16:48.078 10:15:48 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.078 [2024-10-17 10:15:48.464303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.078 [2024-10-17 10:15:48.502090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.078 [2024-10-17 10:15:48.502263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.078 [2024-10-17 10:15:48.513094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.078 [2024-10-17 10:15:48.517154] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:48.078 [2024-10-17 10:15:48.517166] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.078 10:15:48 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.078 [2024-10-17 10:15:48.521244] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.078 [2024-10-17 10:15:48.529393] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.078 [2024-10-17 10:15:48.533073] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.078 10:15:48 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:48.078 10:15:48 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:48.078 10:15:48 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71580 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71580 ']' 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71580 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.078 10:15:48 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71580 00:16:48.079 10:15:48 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.079 10:15:48 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.079 10:15:48 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71580' 00:16:48.079 killing process with pid 71580 00:16:48.079 10:15:48 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71580 00:16:48.079 10:15:48 ublk_recovery -- common/autotest_common.sh@974 -- # 
wait 71580 00:16:48.079 [2024-10-17 10:15:49.599978] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.079 [2024-10-17 10:15:49.600157] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.079 00:16:48.079 real 1m4.322s 00:16:48.079 user 1m48.026s 00:16:48.079 sys 0m21.015s 00:16:48.079 10:15:50 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.079 10:15:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 ************************************ 00:16:48.079 END TEST ublk_recovery 00:16:48.079 ************************************ 00:16:48.079 10:15:50 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:48.079 10:15:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.079 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 10:15:50 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:48.079 10:15:50 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:48.079 10:15:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:48.079 10:15:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.079 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 ************************************ 00:16:48.079 START TEST ftl 00:16:48.079 ************************************ 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:48.079 * Looking for test storage... 
00:16:48.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.079 10:15:50 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.079 10:15:50 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.079 10:15:50 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.079 10:15:50 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.079 10:15:50 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.079 10:15:50 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:48.079 10:15:50 ftl -- scripts/common.sh@345 -- # : 1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.079 10:15:50 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:48.079 10:15:50 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@353 -- # local d=1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.079 10:15:50 ftl -- scripts/common.sh@355 -- # echo 1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.079 10:15:50 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@353 -- # local d=2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.079 10:15:50 ftl -- scripts/common.sh@355 -- # echo 2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.079 10:15:50 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.079 10:15:50 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.079 10:15:50 ftl -- scripts/common.sh@368 -- # return 0 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:48.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.079 --rc genhtml_branch_coverage=1 00:16:48.079 --rc genhtml_function_coverage=1 00:16:48.079 --rc genhtml_legend=1 00:16:48.079 --rc geninfo_all_blocks=1 00:16:48.079 --rc geninfo_unexecuted_blocks=1 00:16:48.079 00:16:48.079 ' 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:48.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.079 --rc genhtml_branch_coverage=1 00:16:48.079 --rc genhtml_function_coverage=1 00:16:48.079 --rc genhtml_legend=1 00:16:48.079 --rc geninfo_all_blocks=1 00:16:48.079 --rc geninfo_unexecuted_blocks=1 00:16:48.079 00:16:48.079 ' 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:48.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.079 --rc genhtml_branch_coverage=1 00:16:48.079 --rc genhtml_function_coverage=1 00:16:48.079 --rc 
genhtml_legend=1 00:16:48.079 --rc geninfo_all_blocks=1 00:16:48.079 --rc geninfo_unexecuted_blocks=1 00:16:48.079 00:16:48.079 ' 00:16:48.079 10:15:50 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:48.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.079 --rc genhtml_branch_coverage=1 00:16:48.079 --rc genhtml_function_coverage=1 00:16:48.079 --rc genhtml_legend=1 00:16:48.079 --rc geninfo_all_blocks=1 00:16:48.079 --rc geninfo_unexecuted_blocks=1 00:16:48.079 00:16:48.079 ' 00:16:48.079 10:15:50 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:48.079 10:15:50 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:48.079 10:15:50 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:48.079 10:15:50 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:48.079 10:15:50 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:48.079 10:15:50 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:48.079 10:15:50 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.079 10:15:50 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:48.079 10:15:50 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:48.079 10:15:50 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.079 10:15:50 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.079 10:15:50 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:48.079 10:15:50 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:48.079 10:15:50 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:48.079 10:15:50 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:48.079 10:15:50 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:48.079 10:15:50 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:48.079 10:15:50 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.079 10:15:50 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.079 10:15:50 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:48.079 10:15:50 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:48.079 10:15:50 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:48.080 10:15:50 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:48.080 10:15:50 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:48.080 10:15:50 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:48.080 10:15:50 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:48.080 10:15:50 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:48.080 10:15:50 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:48.080 10:15:50 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:48.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:48.080 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:48.080 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:48.080 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:48.080 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72385 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:48.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.080 10:15:50 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72385 00:16:48.080 10:15:50 ftl -- common/autotest_common.sh@831 -- # '[' -z 72385 ']' 00:16:48.080 10:15:50 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.080 10:15:50 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.080 10:15:50 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.080 10:15:50 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.080 10:15:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:48.080 [2024-10-17 10:15:51.021347] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:16:48.080 [2024-10-17 10:15:51.021633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72385 ] 00:16:48.341 [2024-10-17 10:15:51.171390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.341 [2024-10-17 10:15:51.272902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.918 10:15:51 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.918 10:15:51 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:48.918 10:15:51 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:49.177 10:15:52 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:49.745 10:15:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:49.745 10:15:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:50.318 10:15:53 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:50.318 10:15:53 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:50.318 10:15:53 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@50 -- # break 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:50.579 10:15:53 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:50.579 10:15:53 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:50.839 10:15:53 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:50.839 10:15:53 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:50.839 10:15:53 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:50.839 10:15:53 ftl -- ftl/ftl.sh@63 -- # break 00:16:50.839 10:15:53 ftl -- ftl/ftl.sh@66 -- # killprocess 72385 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@950 -- # '[' -z 72385 ']' 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@954 -- # kill -0 72385 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@955 -- # uname 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72385 00:16:50.839 killing process with pid 72385 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72385' 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@969 -- # kill 72385 00:16:50.839 10:15:53 ftl -- common/autotest_common.sh@974 -- # wait 72385 00:16:52.215 10:15:54 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:52.215 10:15:54 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:52.215 10:15:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:52.215 10:15:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.215 10:15:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:52.215 ************************************ 00:16:52.215 START TEST ftl_fio_basic 00:16:52.215 ************************************ 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:52.215 * Looking for test storage... 
00:16:52.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:52.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.215 --rc genhtml_branch_coverage=1 00:16:52.215 --rc genhtml_function_coverage=1 00:16:52.215 --rc genhtml_legend=1 00:16:52.215 --rc geninfo_all_blocks=1 00:16:52.215 --rc geninfo_unexecuted_blocks=1 00:16:52.215 00:16:52.215 ' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:52.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.215 --rc 
genhtml_branch_coverage=1 00:16:52.215 --rc genhtml_function_coverage=1 00:16:52.215 --rc genhtml_legend=1 00:16:52.215 --rc geninfo_all_blocks=1 00:16:52.215 --rc geninfo_unexecuted_blocks=1 00:16:52.215 00:16:52.215 ' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:52.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.215 --rc genhtml_branch_coverage=1 00:16:52.215 --rc genhtml_function_coverage=1 00:16:52.215 --rc genhtml_legend=1 00:16:52.215 --rc geninfo_all_blocks=1 00:16:52.215 --rc geninfo_unexecuted_blocks=1 00:16:52.215 00:16:52.215 ' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:52.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.215 --rc genhtml_branch_coverage=1 00:16:52.215 --rc genhtml_function_coverage=1 00:16:52.215 --rc genhtml_legend=1 00:16:52.215 --rc geninfo_all_blocks=1 00:16:52.215 --rc geninfo_unexecuted_blocks=1 00:16:52.215 00:16:52.215 ' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.215 
10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:52.215 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72517 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72517 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72517 ']' 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:52.216 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.216 10:15:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:52.216 [2024-10-17 10:15:55.249784] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:16:52.216 [2024-10-17 10:15:55.250098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72517 ] 00:16:52.474 [2024-10-17 10:15:55.392817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.474 [2024-10-17 10:15:55.477831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.474 [2024-10-17 10:15:55.478218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.474 [2024-10-17 10:15:55.478246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:53.041 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:53.300 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:53.558 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:53.558 { 00:16:53.558 "name": "nvme0n1", 00:16:53.558 "aliases": [ 00:16:53.558 "5c4659a6-2df3-4af6-915c-5fddf52162ee" 00:16:53.558 ], 00:16:53.558 "product_name": "NVMe disk", 00:16:53.558 "block_size": 4096, 00:16:53.558 "num_blocks": 1310720, 00:16:53.559 "uuid": "5c4659a6-2df3-4af6-915c-5fddf52162ee", 00:16:53.559 "numa_id": -1, 00:16:53.559 "assigned_rate_limits": { 00:16:53.559 "rw_ios_per_sec": 0, 00:16:53.559 "rw_mbytes_per_sec": 0, 00:16:53.559 "r_mbytes_per_sec": 0, 00:16:53.559 "w_mbytes_per_sec": 0 00:16:53.559 }, 00:16:53.559 "claimed": false, 00:16:53.559 "zoned": false, 00:16:53.559 "supported_io_types": { 00:16:53.559 "read": true, 00:16:53.559 "write": true, 00:16:53.559 "unmap": true, 00:16:53.559 "flush": true, 00:16:53.559 "reset": true, 00:16:53.559 "nvme_admin": true, 00:16:53.559 "nvme_io": true, 00:16:53.559 "nvme_io_md": 
false, 00:16:53.559 "write_zeroes": true, 00:16:53.559 "zcopy": false, 00:16:53.559 "get_zone_info": false, 00:16:53.559 "zone_management": false, 00:16:53.559 "zone_append": false, 00:16:53.559 "compare": true, 00:16:53.559 "compare_and_write": false, 00:16:53.559 "abort": true, 00:16:53.559 "seek_hole": false, 00:16:53.559 "seek_data": false, 00:16:53.559 "copy": true, 00:16:53.559 "nvme_iov_md": false 00:16:53.559 }, 00:16:53.559 "driver_specific": { 00:16:53.559 "nvme": [ 00:16:53.559 { 00:16:53.559 "pci_address": "0000:00:11.0", 00:16:53.559 "trid": { 00:16:53.559 "trtype": "PCIe", 00:16:53.559 "traddr": "0000:00:11.0" 00:16:53.559 }, 00:16:53.559 "ctrlr_data": { 00:16:53.559 "cntlid": 0, 00:16:53.559 "vendor_id": "0x1b36", 00:16:53.559 "model_number": "QEMU NVMe Ctrl", 00:16:53.559 "serial_number": "12341", 00:16:53.559 "firmware_revision": "8.0.0", 00:16:53.559 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:53.559 "oacs": { 00:16:53.559 "security": 0, 00:16:53.559 "format": 1, 00:16:53.559 "firmware": 0, 00:16:53.559 "ns_manage": 1 00:16:53.559 }, 00:16:53.559 "multi_ctrlr": false, 00:16:53.559 "ana_reporting": false 00:16:53.559 }, 00:16:53.559 "vs": { 00:16:53.559 "nvme_version": "1.4" 00:16:53.559 }, 00:16:53.559 "ns_data": { 00:16:53.559 "id": 1, 00:16:53.559 "can_share": false 00:16:53.559 } 00:16:53.559 } 00:16:53.559 ], 00:16:53.559 "mp_policy": "active_passive" 00:16:53.559 } 00:16:53.559 } 00:16:53.559 ]' 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:53.559 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:53.817 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:53.817 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:54.075 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=54248088-ee63-4328-b43c-8387b77b3f4f 00:16:54.075 10:15:56 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 54248088-ee63-4328-b43c-8387b77b3f4f 00:16:54.076 10:15:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.076 10:15:57 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.076 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:54.076 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:54.076 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.076 10:15:57 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.332 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:54.332 { 00:16:54.332 "name": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:16:54.332 "aliases": [ 00:16:54.332 "lvs/nvme0n1p0" 00:16:54.332 ], 00:16:54.332 "product_name": "Logical Volume", 00:16:54.332 "block_size": 4096, 00:16:54.332 "num_blocks": 26476544, 00:16:54.332 "uuid": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:16:54.332 "assigned_rate_limits": { 00:16:54.332 "rw_ios_per_sec": 0, 00:16:54.332 "rw_mbytes_per_sec": 0, 00:16:54.332 "r_mbytes_per_sec": 0, 00:16:54.332 "w_mbytes_per_sec": 0 00:16:54.332 }, 00:16:54.332 "claimed": false, 00:16:54.332 "zoned": false, 00:16:54.332 "supported_io_types": { 00:16:54.332 "read": true, 00:16:54.332 "write": true, 00:16:54.332 "unmap": true, 00:16:54.332 "flush": false, 00:16:54.332 "reset": true, 00:16:54.332 "nvme_admin": false, 00:16:54.332 "nvme_io": false, 00:16:54.332 "nvme_io_md": false, 00:16:54.333 "write_zeroes": true, 00:16:54.333 "zcopy": false, 00:16:54.333 "get_zone_info": false, 00:16:54.333 "zone_management": false, 00:16:54.333 "zone_append": false, 00:16:54.333 "compare": false, 00:16:54.333 "compare_and_write": false, 00:16:54.333 "abort": false, 00:16:54.333 "seek_hole": true, 00:16:54.333 "seek_data": true, 00:16:54.333 "copy": false, 00:16:54.333 "nvme_iov_md": false 00:16:54.333 }, 00:16:54.333 "driver_specific": { 00:16:54.333 "lvol": { 00:16:54.333 "lvol_store_uuid": "54248088-ee63-4328-b43c-8387b77b3f4f", 00:16:54.333 "base_bdev": "nvme0n1", 00:16:54.333 "thin_provision": true, 00:16:54.333 "num_allocated_clusters": 0, 00:16:54.333 "snapshot": false, 00:16:54.333 "clone": false, 00:16:54.333 "esnap_clone": false 00:16:54.333 } 00:16:54.333 } 00:16:54.333 } 00:16:54.333 ]' 00:16:54.333 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:54.333 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:54.333 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:54.590 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:54.590 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:54.590 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:54.590 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:54.590 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:54.590 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:54.849 { 00:16:54.849 "name": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:16:54.849 "aliases": [ 00:16:54.849 "lvs/nvme0n1p0" 00:16:54.849 ], 00:16:54.849 "product_name": "Logical Volume", 00:16:54.849 "block_size": 4096, 00:16:54.849 "num_blocks": 26476544, 00:16:54.849 "uuid": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:16:54.849 "assigned_rate_limits": { 00:16:54.849 "rw_ios_per_sec": 0, 00:16:54.849 "rw_mbytes_per_sec": 0, 00:16:54.849 "r_mbytes_per_sec": 0, 00:16:54.849 "w_mbytes_per_sec": 0 00:16:54.849 }, 00:16:54.849 "claimed": false, 00:16:54.849 "zoned": false, 00:16:54.849 "supported_io_types": { 00:16:54.849 "read": true, 00:16:54.849 "write": true, 00:16:54.849 "unmap": true, 00:16:54.849 "flush": false, 00:16:54.849 "reset": true, 00:16:54.849 "nvme_admin": false, 00:16:54.849 "nvme_io": false, 00:16:54.849 "nvme_io_md": false, 00:16:54.849 "write_zeroes": true, 00:16:54.849 "zcopy": false, 00:16:54.849 "get_zone_info": false, 00:16:54.849 "zone_management": false, 00:16:54.849 "zone_append": false, 00:16:54.849 "compare": false, 00:16:54.849 "compare_and_write": false, 00:16:54.849 "abort": false, 00:16:54.849 "seek_hole": true, 00:16:54.849 "seek_data": true, 00:16:54.849 "copy": false, 00:16:54.849 "nvme_iov_md": false 00:16:54.849 }, 00:16:54.849 "driver_specific": { 00:16:54.849 "lvol": { 00:16:54.849 "lvol_store_uuid": "54248088-ee63-4328-b43c-8387b77b3f4f", 00:16:54.849 "base_bdev": "nvme0n1", 00:16:54.849 "thin_provision": true, 00:16:54.849 "num_allocated_clusters": 0, 00:16:54.849 "snapshot": false, 00:16:54.849 "clone": false, 00:16:54.849 "esnap_clone": false 00:16:54.849 } 00:16:54.849 } 00:16:54.849 } 00:16:54.849 ]' 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:54.849 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:55.107 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:55.107 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:55.107 10:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:55.107 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:55.107 10:15:57 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:55.107 
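The "unary operator expected" message reported on the next line comes from the '[' -eq 1 ']' test directly above: fio.sh line 52 expands an unset variable to the left of -eq, so the [ builtin sees a missing operand, returns an error status, and the guarded branch is simply skipped; the run continues unharmed. A defensive sketch of the same guard (the variable name is hypothetical, only the failure mode is taken from the trace):

  guard=""                            # unset/empty, as in this run
  if [ "${guard:-0}" -eq 1 ]; then    # the :-0 default keeps the operand count fixed
      echo "guard enabled"
  fi

Using bash's [[ ]] instead of [ also avoids the error, since an empty expansion inside [[ is evaluated as arithmetic zero rather than dropped from the word list.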
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:55.107 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0de301e0-87ae-4c09-8f6c-4d1e9df82714 00:16:55.366 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:55.366 { 00:16:55.366 "name": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:16:55.366 "aliases": [ 00:16:55.367 "lvs/nvme0n1p0" 00:16:55.367 ], 00:16:55.367 "product_name": "Logical Volume", 00:16:55.367 "block_size": 4096, 00:16:55.367 "num_blocks": 26476544, 00:16:55.367 "uuid": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:16:55.367 "assigned_rate_limits": { 00:16:55.367 "rw_ios_per_sec": 0, 00:16:55.367 "rw_mbytes_per_sec": 0, 00:16:55.367 "r_mbytes_per_sec": 0, 00:16:55.367 "w_mbytes_per_sec": 0 00:16:55.367 }, 00:16:55.367 "claimed": false, 00:16:55.367 "zoned": false, 00:16:55.367 "supported_io_types": { 00:16:55.367 "read": true, 00:16:55.367 "write": true, 00:16:55.367 "unmap": true, 00:16:55.367 "flush": false, 00:16:55.367 "reset": true, 00:16:55.367 "nvme_admin": false, 00:16:55.367 "nvme_io": false, 00:16:55.367 "nvme_io_md": false, 00:16:55.367 "write_zeroes": true, 00:16:55.367 "zcopy": false, 00:16:55.367 "get_zone_info": false, 00:16:55.367 "zone_management": false, 00:16:55.367 "zone_append": false, 00:16:55.367 "compare": false, 00:16:55.367 "compare_and_write": false, 00:16:55.367 "abort": false, 00:16:55.367 "seek_hole": true, 00:16:55.367 "seek_data": true, 00:16:55.367 "copy": false, 00:16:55.367 "nvme_iov_md": false 00:16:55.367 }, 00:16:55.367 "driver_specific": { 00:16:55.367 "lvol": { 00:16:55.367 "lvol_store_uuid": "54248088-ee63-4328-b43c-8387b77b3f4f", 00:16:55.367 "base_bdev": "nvme0n1", 00:16:55.367 "thin_provision": true, 00:16:55.367 "num_allocated_clusters": 0, 00:16:55.367 "snapshot": false, 00:16:55.367 "clone": false, 00:16:55.367 "esnap_clone": false 00:16:55.367 } 00:16:55.367 } 00:16:55.367 } 00:16:55.367 ]' 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:55.367 10:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0de301e0-87ae-4c09-8f6c-4d1e9df82714 -c nvc0n1p0 --l2p_dram_limit 60 00:16:55.628 [2024-10-17 10:15:58.627936] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.627987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.628 [2024-10-17 10:15:58.628003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.628 [2024-10-17 10:15:58.628011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.628091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.628102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.628 [2024-10-17 10:15:58.628113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:55.628 [2024-10-17 10:15:58.628123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.628168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.628 [2024-10-17 10:15:58.628864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.628 [2024-10-17 10:15:58.628888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.628897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.628 [2024-10-17 10:15:58.628908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:16:55.628 [2024-10-17 10:15:58.628915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.628983] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4a1199cb-ad0c-4300-81cc-2547cdcb81ab 00:16:55.628 [2024-10-17 10:15:58.630039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.630089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:55.628 [2024-10-17 10:15:58.630100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:55.628 [2024-10-17 10:15:58.630111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.635268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.635300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.628 [2024-10-17 10:15:58.635310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:16:55.628 [2024-10-17 10:15:58.635318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.635412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.635424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.628 [2024-10-17 10:15:58.635435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:55.628 [2024-10-17 10:15:58.635447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.635491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.635502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.628 [2024-10-17 10:15:58.635509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:55.628 [2024-10-17 10:15:58.635517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:55.628 [2024-10-17 10:15:58.635541] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.628 [2024-10-17 10:15:58.639101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.639128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.628 [2024-10-17 10:15:58.639140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:16:55.628 [2024-10-17 10:15:58.639148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.639194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.628 [2024-10-17 10:15:58.639204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.628 [2024-10-17 10:15:58.639214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:55.628 [2024-10-17 10:15:58.639220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.628 [2024-10-17 10:15:58.639261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:55.628 [2024-10-17 10:15:58.639406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.628 [2024-10-17 10:15:58.639423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.628 [2024-10-17 10:15:58.639434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:55.628 [2024-10-17 10:15:58.639445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.628 [2024-10-17 10:15:58.639454] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.628 [2024-10-17 10:15:58.639463] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:55.628 [2024-10-17 10:15:58.639470] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.628 [2024-10-17 10:15:58.639478] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.629 [2024-10-17 10:15:58.639486] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.629 [2024-10-17 10:15:58.639494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.629 [2024-10-17 10:15:58.639501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.629 [2024-10-17 10:15:58.639510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:16:55.629 [2024-10-17 10:15:58.639520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.629 [2024-10-17 10:15:58.639611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.629 [2024-10-17 10:15:58.639624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.629 [2024-10-17 10:15:58.639633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:55.629 [2024-10-17 10:15:58.639640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.629 [2024-10-17 10:15:58.639759] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.629 [2024-10-17 10:15:58.639771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.629 
[2024-10-17 10:15:58.639781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.629 [2024-10-17 10:15:58.639806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.629 [2024-10-17 10:15:58.639829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.629 [2024-10-17 10:15:58.639843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.629 [2024-10-17 10:15:58.639849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:55.629 [2024-10-17 10:15:58.639857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.629 [2024-10-17 10:15:58.639864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.629 [2024-10-17 10:15:58.639876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:55.629 [2024-10-17 10:15:58.639882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.629 [2024-10-17 10:15:58.639901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.629 [2024-10-17 10:15:58.639923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.629 [2024-10-17 10:15:58.639944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.629 [2024-10-17 10:15:58.639965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.629 [2024-10-17 10:15:58.639986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:55.629 [2024-10-17 10:15:58.639993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.629 [2024-10-17 10:15:58.639999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.629 [2024-10-17 10:15:58.640009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:55.629 [2024-10-17 10:15:58.640016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:55.629 [2024-10-17 10:15:58.640024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.629 [2024-10-17 10:15:58.640042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:55.629 [2024-10-17 10:15:58.640067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.629 [2024-10-17 10:15:58.640074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.629 [2024-10-17 10:15:58.640082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:55.629 [2024-10-17 10:15:58.640089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.640096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.629 [2024-10-17 10:15:58.640102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:55.629 [2024-10-17 10:15:58.640111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.640118] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.629 [2024-10-17 10:15:58.640127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.629 [2024-10-17 10:15:58.640140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.629 [2024-10-17 10:15:58.640150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.629 [2024-10-17 10:15:58.640157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.629 [2024-10-17 10:15:58.640167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.629 [2024-10-17 10:15:58.640174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.629 [2024-10-17 10:15:58.640183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.629 [2024-10-17 10:15:58.640189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.629 [2024-10-17 10:15:58.640197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.629 [2024-10-17 10:15:58.640206] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.629 [2024-10-17 10:15:58.640217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:55.629 [2024-10-17 10:15:58.640234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:55.629 [2024-10-17 10:15:58.640241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:55.629 [2024-10-17 10:15:58.640249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:55.629 [2024-10-17 10:15:58.640256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:55.629 [2024-10-17 10:15:58.640264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:55.629 [2024-10-17 
10:15:58.640271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:55.629 [2024-10-17 10:15:58.640279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:55.629 [2024-10-17 10:15:58.640286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:55.629 [2024-10-17 10:15:58.640296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:55.629 [2024-10-17 10:15:58.640334] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.629 [2024-10-17 10:15:58.640343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.629 [2024-10-17 10:15:58.640359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.629 [2024-10-17 10:15:58.640365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.629 [2024-10-17 10:15:58.640374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.629 [2024-10-17 10:15:58.640381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.629 [2024-10-17 10:15:58.640389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.629 [2024-10-17 10:15:58.640398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:16:55.629 [2024-10-17 10:15:58.640408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.629 [2024-10-17 10:15:58.640459] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
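Before the scrub messages that follow, the layout just dumped is easy to sanity-check. The FTL exposes 20,971,520 user blocks of 4 KiB (80 GiB) carved from the roughly 101 GiB base device, and at the stated 4-byte L2P address size the full mapping table costs exactly the 80.00 MiB shown for the l2p region; the --l2p_dram_limit 60 passed to bdev_ftl_create above is why the resident L2P cache is later reported as 59 of 60 MiB. Pure shell arithmetic, no SPDK involved:

  entries=20971520   # L2P entries from the layout dump
  entry_sz=4         # L2P address size in bytes
  blk=4096           # logical block size
  echo "l2p table:  $(( entries * entry_sz / 1024 / 1024 )) MiB"    # prints 80
  echo "user space: $(( entries * blk / 1024 / 1024 / 1024 )) GiB"  # prints 80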
00:16:55.629 [2024-10-17 10:15:58.640473] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:59.815 [2024-10-17 10:16:02.182892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.182951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:59.815 [2024-10-17 10:16:02.182965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3542.419 ms 00:16:59.815 [2024-10-17 10:16:02.182978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.208013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.208074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.815 [2024-10-17 10:16:02.208090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.811 ms 00:16:59.815 [2024-10-17 10:16:02.208115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.208244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.208256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.815 [2024-10-17 10:16:02.208265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:59.815 [2024-10-17 10:16:02.208276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.248647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.248695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.815 [2024-10-17 10:16:02.248708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.325 ms 00:16:59.815 [2024-10-17 10:16:02.248718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.248767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.248779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.815 [2024-10-17 10:16:02.248787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.815 [2024-10-17 10:16:02.248797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.249190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.249211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.815 [2024-10-17 10:16:02.249220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:16:59.815 [2024-10-17 10:16:02.249230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.249367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.249385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.815 [2024-10-17 10:16:02.249393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:59.815 [2024-10-17 10:16:02.249404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.264009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.264062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.815 [2024-10-17 
10:16:02.264073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.586 ms 00:16:59.815 [2024-10-17 10:16:02.264083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.275688] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.815 [2024-10-17 10:16:02.289865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.289909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.815 [2024-10-17 10:16:02.289922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.691 ms 00:16:59.815 [2024-10-17 10:16:02.289929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.341546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.341696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:59.815 [2024-10-17 10:16:02.341718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.581 ms 00:16:59.815 [2024-10-17 10:16:02.341727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.341910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.341927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.815 [2024-10-17 10:16:02.341940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:59.815 [2024-10-17 10:16:02.341947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.364852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.364979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:59.815 [2024-10-17 10:16:02.364998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.857 ms 00:16:59.815 [2024-10-17 10:16:02.365009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.388002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.388147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:59.815 [2024-10-17 10:16:02.388167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.928 ms 00:16:59.815 [2024-10-17 10:16:02.388174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.388732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.388748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.815 [2024-10-17 10:16:02.388760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:16:59.815 [2024-10-17 10:16:02.388768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.815 [2024-10-17 10:16:02.459655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.815 [2024-10-17 10:16:02.459811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:59.815 [2024-10-17 10:16:02.459834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.844 ms 00:16:59.816 [2024-10-17 10:16:02.459841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.816 [2024-10-17 
10:16:02.484199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.816 [2024-10-17 10:16:02.484233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:59.816 [2024-10-17 10:16:02.484246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.264 ms 00:16:59.816 [2024-10-17 10:16:02.484254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.816 [2024-10-17 10:16:02.507173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.816 [2024-10-17 10:16:02.507207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:59.816 [2024-10-17 10:16:02.507220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.874 ms 00:16:59.816 [2024-10-17 10:16:02.507228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.816 [2024-10-17 10:16:02.530310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.816 [2024-10-17 10:16:02.530427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.816 [2024-10-17 10:16:02.530447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.039 ms 00:16:59.816 [2024-10-17 10:16:02.530454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.816 [2024-10-17 10:16:02.530515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.816 [2024-10-17 10:16:02.530525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.816 [2024-10-17 10:16:02.530537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.816 [2024-10-17 10:16:02.530545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.816 [2024-10-17 10:16:02.530635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.816 [2024-10-17 10:16:02.530645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.816 [2024-10-17 10:16:02.530654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:59.816 [2024-10-17 10:16:02.530662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.816 [2024-10-17 10:16:02.531606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3903.243 ms, result 0 00:16:59.816 { 00:16:59.816 "name": "ftl0", 00:16:59.816 "uuid": "4a1199cb-ad0c-4300-81cc-2547cdcb81ab" 00:16:59.816 } 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:59.816 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:00.074 [ 00:17:00.074 { 00:17:00.074 "name": "ftl0", 00:17:00.074 "aliases": [ 00:17:00.074 "4a1199cb-ad0c-4300-81cc-2547cdcb81ab" 00:17:00.074 ], 00:17:00.074 "product_name": "FTL 
disk", 00:17:00.074 "block_size": 4096, 00:17:00.074 "num_blocks": 20971520, 00:17:00.074 "uuid": "4a1199cb-ad0c-4300-81cc-2547cdcb81ab", 00:17:00.074 "assigned_rate_limits": { 00:17:00.074 "rw_ios_per_sec": 0, 00:17:00.074 "rw_mbytes_per_sec": 0, 00:17:00.074 "r_mbytes_per_sec": 0, 00:17:00.074 "w_mbytes_per_sec": 0 00:17:00.074 }, 00:17:00.074 "claimed": false, 00:17:00.074 "zoned": false, 00:17:00.074 "supported_io_types": { 00:17:00.074 "read": true, 00:17:00.074 "write": true, 00:17:00.074 "unmap": true, 00:17:00.074 "flush": true, 00:17:00.074 "reset": false, 00:17:00.074 "nvme_admin": false, 00:17:00.074 "nvme_io": false, 00:17:00.074 "nvme_io_md": false, 00:17:00.074 "write_zeroes": true, 00:17:00.074 "zcopy": false, 00:17:00.074 "get_zone_info": false, 00:17:00.074 "zone_management": false, 00:17:00.074 "zone_append": false, 00:17:00.074 "compare": false, 00:17:00.074 "compare_and_write": false, 00:17:00.074 "abort": false, 00:17:00.074 "seek_hole": false, 00:17:00.074 "seek_data": false, 00:17:00.074 "copy": false, 00:17:00.074 "nvme_iov_md": false 00:17:00.074 }, 00:17:00.074 "driver_specific": { 00:17:00.074 "ftl": { 00:17:00.074 "base_bdev": "0de301e0-87ae-4c09-8f6c-4d1e9df82714", 00:17:00.074 "cache": "nvc0n1p0" 00:17:00.074 } 00:17:00.074 } 00:17:00.074 } 00:17:00.074 ] 00:17:00.074 10:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:17:00.074 10:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:00.074 10:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:00.074 10:16:03 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:00.332 10:16:03 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:00.332 [2024-10-17 10:16:03.352384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.352434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:00.332 [2024-10-17 10:16:03.352447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.332 [2024-10-17 10:16:03.352457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.352499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.332 [2024-10-17 10:16:03.355132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.355164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:00.332 [2024-10-17 10:16:03.355176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:17:00.332 [2024-10-17 10:16:03.355185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.355596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.355613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:00.332 [2024-10-17 10:16:03.355623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:17:00.332 [2024-10-17 10:16:03.355631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.358873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.358895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:00.332 
[2024-10-17 10:16:03.358907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:17:00.332 [2024-10-17 10:16:03.358920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.365036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.365070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:00.332 [2024-10-17 10:16:03.365082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:17:00.332 [2024-10-17 10:16:03.365090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.388458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.388597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:00.332 [2024-10-17 10:16:03.388617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.268 ms 00:17:00.332 [2024-10-17 10:16:03.388624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.403742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.403871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:00.332 [2024-10-17 10:16:03.403891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.060 ms 00:17:00.332 [2024-10-17 10:16:03.403899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.332 [2024-10-17 10:16:03.404085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.332 [2024-10-17 10:16:03.404099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:00.332 [2024-10-17 10:16:03.404110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:00.332 [2024-10-17 10:16:03.404117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.592 [2024-10-17 10:16:03.427216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.592 [2024-10-17 10:16:03.427259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:00.592 [2024-10-17 10:16:03.427272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.068 ms 00:17:00.592 [2024-10-17 10:16:03.427279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.592 [2024-10-17 10:16:03.449896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.592 [2024-10-17 10:16:03.449927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:00.592 [2024-10-17 10:16:03.449939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.573 ms 00:17:00.592 [2024-10-17 10:16:03.449946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.592 [2024-10-17 10:16:03.472246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.592 [2024-10-17 10:16:03.472277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:00.592 [2024-10-17 10:16:03.472289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.255 ms 00:17:00.592 [2024-10-17 10:16:03.472296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.592 [2024-10-17 10:16:03.494705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.592 [2024-10-17 10:16:03.494870] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:00.592 [2024-10-17 10:16:03.494889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.305 ms 00:17:00.592 [2024-10-17 10:16:03.494896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.592 [2024-10-17 10:16:03.494934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:00.592 [2024-10-17 10:16:03.494947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.494958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.494967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.494976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.494983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.494992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.494999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 
[2024-10-17 10:16:03.495151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:00.592 [2024-10-17 10:16:03.495249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:00.593 [2024-10-17 10:16:03.495363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:00.593 [2024-10-17 10:16:03.495823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:00.593 [2024-10-17 10:16:03.495832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a1199cb-ad0c-4300-81cc-2547cdcb81ab 00:17:00.593 [2024-10-17 10:16:03.495840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:00.593 [2024-10-17 10:16:03.495850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:00.593 [2024-10-17 10:16:03.495856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:00.593 [2024-10-17 10:16:03.495865] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:00.593 [2024-10-17 10:16:03.495872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:00.593 [2024-10-17 10:16:03.495882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:00.593 [2024-10-17 10:16:03.495891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:00.593 [2024-10-17 10:16:03.495899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:00.593 [2024-10-17 10:16:03.495905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:00.593 [2024-10-17 10:16:03.495914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.593 [2024-10-17 10:16:03.495921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:00.593 [2024-10-17 10:16:03.495931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:17:00.593 [2024-10-17 10:16:03.495938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.593 [2024-10-17 10:16:03.508190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.593 [2024-10-17 10:16:03.508219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.593 [2024-10-17 10:16:03.508232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.214 ms 00:17:00.593 [2024-10-17 10:16:03.508242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.593 [2024-10-17 10:16:03.508599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.593 [2024-10-17 10:16:03.508608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.593 [2024-10-17 10:16:03.508618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:00.593 [2024-10-17 10:16:03.508625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.593 [2024-10-17 10:16:03.552181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.593 [2024-10-17 10:16:03.552215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.593 [2024-10-17 10:16:03.552228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.593 [2024-10-17 10:16:03.552237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
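A note on the statistics block above: the "WAF: inf" line is expected for this run. Write amplification factor is conventionally the ratio of total media writes to user writes, and this test tore the device down before issuing any user I/O, so with the values reported here

  WAF = total writes / user writes = 960 / 0  ->  inf

The 960 writes are presumably the FTL's own metadata and initialization traffic; the ratio only becomes finite once user writes are non-zero.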
00:17:00.593 [2024-10-17 10:16:03.552296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.593 [2024-10-17 10:16:03.552305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.593 [2024-10-17 10:16:03.552314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.593 [2024-10-17 10:16:03.552321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.593 [2024-10-17 10:16:03.552397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.593 [2024-10-17 10:16:03.552408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.593 [2024-10-17 10:16:03.552417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.593 [2024-10-17 10:16:03.552424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.593 [2024-10-17 10:16:03.552456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.593 [2024-10-17 10:16:03.552464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.594 [2024-10-17 10:16:03.552473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.594 [2024-10-17 10:16:03.552480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.594 [2024-10-17 10:16:03.632479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.594 [2024-10-17 10:16:03.632523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.594 [2024-10-17 10:16:03.632536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.594 [2024-10-17 10:16:03.632547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.852 [2024-10-17 10:16:03.694421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 10:16:03.694429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.852 [2024-10-17 10:16:03.694533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 10:16:03.694541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.852 [2024-10-17 10:16:03.694623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 10:16:03.694630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.852 [2024-10-17 10:16:03.694749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 
10:16:03.694756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.852 [2024-10-17 10:16:03.694832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 10:16:03.694839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.852 [2024-10-17 10:16:03.694898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 10:16:03.694906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.694955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.852 [2024-10-17 10:16:03.694965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.852 [2024-10-17 10:16:03.694975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.852 [2024-10-17 10:16:03.694983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.852 [2024-10-17 10:16:03.695156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.753 ms, result 0 00:17:00.852 true 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72517 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72517 ']' 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72517 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72517 00:17:00.852 killing process with pid 72517 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72517' 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72517 00:17:00.852 10:16:03 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72517 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:04.130 10:16:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:04.130 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:04.130 fio-3.35 00:17:04.130 Starting 1 thread 00:17:08.315 00:17:08.315 test: (groupid=0, jobs=1): err= 0: pid=72712: Thu Oct 17 10:16:10 2024 00:17:08.315 read: IOPS=1343, BW=89.2MiB/s (93.5MB/s)(255MiB/2854msec) 00:17:08.315 slat (nsec): min=3040, max=21425, avg=4533.98, stdev=1888.42 00:17:08.315 clat (usec): min=250, max=865, avg=333.00, stdev=42.18 00:17:08.315 lat (usec): min=254, max=869, avg=337.53, stdev=43.02 00:17:08.315 clat percentiles (usec): 00:17:08.315 | 1.00th=[ 293], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:17:08.315 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 322], 00:17:08.315 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 424], 00:17:08.315 | 99.00th=[ 515], 99.50th=[ 562], 99.90th=[ 725], 99.95th=[ 783], 00:17:08.315 | 99.99th=[ 865] 00:17:08.315 write: IOPS=1352, BW=89.8MiB/s (94.2MB/s)(256MiB/2851msec); 0 zone resets 00:17:08.315 slat (nsec): min=13728, max=72338, avg=19725.66, stdev=3460.02 00:17:08.315 clat (usec): min=286, max=1077, avg=371.34, stdev=71.19 00:17:08.315 lat (usec): min=311, max=1150, avg=391.06, stdev=71.81 00:17:08.315 clat percentiles (usec): 00:17:08.315 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:17:08.315 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 351], 00:17:08.315 | 70.00th=[ 363], 80.00th=[ 404], 90.00th=[ 416], 95.00th=[ 445], 00:17:08.315 | 99.00th=[ 717], 99.50th=[ 799], 99.90th=[ 1057], 99.95th=[ 1074], 00:17:08.315 | 99.99th=[ 1074] 00:17:08.315 bw ( KiB/s): min=89216, max=93840, per=99.75%, avg=91745.60, stdev=1713.82, samples=5 00:17:08.315 iops : min= 1312, max= 1380, avg=1349.20, stdev=25.20, samples=5 00:17:08.315 lat (usec) : 500=97.48%, 750=2.11%, 1000=0.35% 00:17:08.315 
lat (msec) : 2=0.07% 00:17:08.315 cpu : usr=99.26%, sys=0.07%, ctx=6, majf=0, minf=1169 00:17:08.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.315 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.315 00:17:08.315 Run status group 0 (all jobs): 00:17:08.315 READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=255MiB (267MB), run=2854-2854msec 00:17:08.315 WRITE: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=256MiB (269MB), run=2851-2851msec 00:17:09.250 ----------------------------------------------------- 00:17:09.250 Suppressions used: 00:17:09.250 count bytes template 00:17:09.250 1 5 /usr/src/fio/parse.c 00:17:09.250 1 8 libtcmalloc_minimal.so 00:17:09.250 1 904 libcrypto.so 00:17:09.250 ----------------------------------------------------- 00:17:09.250 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:09.509 10:16:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:09.509 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:09.509 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:09.509 fio-3.35 00:17:09.509 Starting 2 threads 00:17:36.053 00:17:36.053 first_half: (groupid=0, jobs=1): err= 0: pid=72798: Thu Oct 17 10:16:35 2024 00:17:36.053 read: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(255MiB/22181msec) 00:17:36.053 slat (nsec): min=3008, max=22508, avg=3729.28, stdev=682.32 00:17:36.053 clat (usec): min=588, max=263073, avg=33217.90, stdev=17465.54 00:17:36.053 lat (usec): min=592, max=263078, avg=33221.63, stdev=17465.55 00:17:36.053 clat percentiles (msec): 00:17:36.053 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:17:36.053 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:17:36.053 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 42], 00:17:36.053 | 99.00th=[ 138], 99.50th=[ 159], 99.90th=[ 218], 99.95th=[ 232], 00:17:36.053 | 99.99th=[ 257] 00:17:36.053 write: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(256MiB/15462msec); 0 zone resets 00:17:36.053 slat (usec): min=3, max=367, avg= 5.46, stdev= 3.34 00:17:36.053 clat (usec): min=351, max=80840, avg=10262.17, stdev=18551.77 00:17:36.053 lat (usec): min=360, max=80845, avg=10267.64, stdev=18551.77 00:17:36.053 clat percentiles (usec): 00:17:36.053 | 1.00th=[ 660], 5.00th=[ 816], 10.00th=[ 955], 20.00th=[ 1123], 00:17:36.053 | 30.00th=[ 1319], 40.00th=[ 1860], 50.00th=[ 3851], 60.00th=[ 5014], 00:17:36.053 | 70.00th=[ 5800], 80.00th=[ 9372], 90.00th=[54264], 95.00th=[63177], 00:17:36.053 | 99.00th=[71828], 99.50th=[73925], 99.90th=[78119], 99.95th=[79168], 00:17:36.053 | 99.99th=[80217] 00:17:36.053 bw ( KiB/s): min= 104, max=51336, per=100.00%, avg=26214.40, stdev=16361.68, samples=20 00:17:36.054 iops : min= 26, max=12834, avg=6553.60, stdev=4090.42, samples=20 00:17:36.054 lat (usec) : 500=0.03%, 750=1.59%, 1000=4.61% 00:17:36.054 lat (msec) : 2=14.58%, 4=5.03%, 10=15.50%, 20=4.67%, 50=47.02% 00:17:36.054 lat (msec) : 100=6.09%, 250=0.87%, 500=0.01% 00:17:36.054 cpu : usr=99.44%, sys=0.11%, ctx=32, majf=0, minf=5543 00:17:36.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:36.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.054 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.054 second_half: (groupid=0, jobs=1): err= 0: pid=72799: Thu Oct 17 10:16:35 2024 00:17:36.054 read: IOPS=2924, BW=11.4MiB/s (12.0MB/s)(255MiB/22332msec) 00:17:36.054 slat (nsec): min=2996, max=45551, avg=3800.27, stdev=809.28 00:17:36.054 clat (usec): min=584, max=265720, avg=32018.26, stdev=14844.68 00:17:36.054 lat (usec): min=587, max=265725, avg=32022.06, stdev=14844.74 00:17:36.054 clat percentiles (msec): 00:17:36.054 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 30], 00:17:36.054 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:17:36.054 | 70.00th=[ 31], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 41], 00:17:36.054 | 
99.00th=[ 116], 99.50th=[ 140], 99.90th=[ 165], 99.95th=[ 186], 00:17:36.054 | 99.99th=[ 262] 00:17:36.054 write: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(256MiB/20264msec); 0 zone resets 00:17:36.054 slat (usec): min=3, max=320, avg= 5.45, stdev= 2.77 00:17:36.054 clat (usec): min=372, max=80930, avg=11691.77, stdev=18942.62 00:17:36.054 lat (usec): min=379, max=80935, avg=11697.22, stdev=18942.65 00:17:36.054 clat percentiles (usec): 00:17:36.054 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 914], 20.00th=[ 1123], 00:17:36.054 | 30.00th=[ 1778], 40.00th=[ 3752], 50.00th=[ 5145], 60.00th=[ 6194], 00:17:36.054 | 70.00th=[ 7963], 80.00th=[11076], 90.00th=[56361], 95.00th=[64226], 00:17:36.054 | 99.00th=[72877], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:17:36.054 | 99.99th=[80217] 00:17:36.054 bw ( KiB/s): min= 32, max=40929, per=77.95%, avg=20168.04, stdev=14116.15, samples=26 00:17:36.054 iops : min= 8, max=10232, avg=5042.00, stdev=3529.02, samples=26 00:17:36.054 lat (usec) : 500=0.02%, 750=2.25%, 1000=4.75% 00:17:36.054 lat (msec) : 2=8.74%, 4=5.18%, 10=19.56%, 20=5.50%, 50=47.10% 00:17:36.054 lat (msec) : 100=6.24%, 250=0.66%, 500=0.01% 00:17:36.054 cpu : usr=99.22%, sys=0.16%, ctx=32, majf=0, minf=5558 00:17:36.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:36.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.054 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.054 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.054 00:17:36.054 Run status group 0 (all jobs): 00:17:36.054 READ: bw=22.8MiB/s (23.9MB/s), 11.4MiB/s-11.5MiB/s (12.0MB/s-12.0MB/s), io=510MiB (535MB), run=22181-22332msec 00:17:36.054 WRITE: bw=25.3MiB/s (26.5MB/s), 12.6MiB/s-16.6MiB/s (13.2MB/s-17.4MB/s), io=512MiB (537MB), run=15462-20264msec 00:17:36.054 ----------------------------------------------------- 00:17:36.054 Suppressions used: 00:17:36.054 count bytes template 00:17:36.054 2 10 /usr/src/fio/parse.c 00:17:36.054 2 192 /usr/src/fio/iolog.c 00:17:36.054 1 8 libtcmalloc_minimal.so 00:17:36.054 1 904 libcrypto.so 00:17:36.054 ----------------------------------------------------- 00:17:36.054 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:36.054 10:16:38 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:36.054 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:36.054 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:36.054 fio-3.35 00:17:36.054 Starting 1 thread 00:17:48.253 00:17:48.253 test: (groupid=0, jobs=1): err= 0: pid=73097: Thu Oct 17 10:16:51 2024 00:17:48.253 read: IOPS=8453, BW=33.0MiB/s (34.6MB/s)(255MiB/7713msec) 00:17:48.253 slat (nsec): min=3070, max=20010, avg=3519.97, stdev=620.47 00:17:48.253 clat (usec): min=500, max=31596, avg=15133.19, stdev=1351.44 00:17:48.253 lat (usec): min=504, max=31600, avg=15136.71, stdev=1351.46 00:17:48.253 clat percentiles (usec): 00:17:48.253 | 1.00th=[13042], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:17:48.253 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:17:48.253 | 70.00th=[15533], 80.00th=[15795], 90.00th=[15926], 95.00th=[16909], 00:17:48.253 | 99.00th=[20579], 99.50th=[23200], 99.90th=[24511], 99.95th=[27657], 00:17:48.253 | 99.99th=[30802] 00:17:48.253 write: IOPS=16.2k, BW=63.3MiB/s (66.4MB/s)(256MiB/4042msec); 0 zone resets 00:17:48.253 slat (usec): min=4, max=108, avg= 6.04, stdev= 2.17 00:17:48.253 clat (usec): min=465, max=45563, avg=7852.59, stdev=9584.73 00:17:48.253 lat (usec): min=471, max=45569, avg=7858.63, stdev=9584.68 00:17:48.253 clat percentiles (usec): 00:17:48.253 | 1.00th=[ 619], 5.00th=[ 717], 10.00th=[ 832], 20.00th=[ 971], 00:17:48.253 | 30.00th=[ 1090], 40.00th=[ 1401], 50.00th=[ 5407], 60.00th=[ 6259], 00:17:48.253 | 70.00th=[ 7242], 80.00th=[ 8848], 90.00th=[28181], 95.00th=[29754], 00:17:48.253 | 99.00th=[31589], 99.50th=[32637], 99.90th=[39060], 99.95th=[39584], 00:17:48.253 | 99.99th=[44303] 00:17:48.253 bw ( KiB/s): min= 3472, max=87184, per=89.82%, avg=58254.22, stdev=22897.55, samples=9 00:17:48.253 iops : min= 868, max=21796, avg=14563.56, stdev=5724.39, samples=9 00:17:48.253 lat (usec) : 500=0.01%, 750=3.19%, 1000=8.28% 00:17:48.253 lat (msec) : 2=9.17%, 4=0.57%, 10=20.20%, 20=49.98%, 50=8.61% 00:17:48.253 cpu : usr=99.17%, sys=0.17%, ctx=28, majf=0, minf=5565 00:17:48.253 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:48.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.253 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.253 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.253 00:17:48.253 Run status group 0 (all jobs): 00:17:48.253 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=255MiB (267MB), run=7713-7713msec 00:17:48.253 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=256MiB (268MB), run=4042-4042msec 00:17:49.629 ----------------------------------------------------- 00:17:49.629 Suppressions used: 00:17:49.629 count bytes template 00:17:49.629 1 5 /usr/src/fio/parse.c 00:17:49.629 2 192 /usr/src/fio/iolog.c 00:17:49.629 1 8 libtcmalloc_minimal.so 00:17:49.629 1 904 libcrypto.so 00:17:49.629 ----------------------------------------------------- 00:17:49.629 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.629 Remove shared memory files 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57204 /dev/shm/spdk_tgt_trace.pid71434 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:49.629 00:17:49.629 ************************************ 00:17:49.629 END TEST ftl_fio_basic 00:17:49.629 ************************************ 00:17:49.629 real 0m57.659s 00:17:49.629 user 2m5.932s 00:17:49.629 sys 0m2.495s 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.629 10:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:49.629 10:16:52 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:49.629 10:16:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:49.629 10:16:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:49.629 10:16:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:49.629 ************************************ 00:17:49.629 START TEST ftl_bdevperf 00:17:49.629 ************************************ 00:17:49.629 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:49.889 * Looking for test storage... 
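Each of the three fio runs in the ftl_fio_basic section that just ended (randw-verify, randw-verify-j2, randw-verify-depth128) was launched through the fio_bdev/fio_plugin helpers traced above. Condensed into a standalone sketch, using paths from this run (an illustration of the pattern, not the helpers' exact code):

  #!/usr/bin/env bash
  # Find the ASAN runtime the SPDK fio plugin links against, then preload
  # it ahead of the plugin itself so fio can run with ioengine=spdk_bdev.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

This mirrors the ldd | grep libasan | awk '{print $3}' probe and the LD_PRELOAD='/usr/lib64/libasan.so.8 ...' assignment visible at each fio launch; preloading the sanitizer runtime first keeps its interceptors ahead of the plugin's own symbols.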
00:17:49.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.889 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.890 --rc genhtml_branch_coverage=1 00:17:49.890 --rc genhtml_function_coverage=1 00:17:49.890 --rc genhtml_legend=1 00:17:49.890 --rc geninfo_all_blocks=1 00:17:49.890 --rc geninfo_unexecuted_blocks=1 00:17:49.890 00:17:49.890 ' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.890 --rc genhtml_branch_coverage=1 00:17:49.890 
--rc genhtml_function_coverage=1 00:17:49.890 --rc genhtml_legend=1 00:17:49.890 --rc geninfo_all_blocks=1 00:17:49.890 --rc geninfo_unexecuted_blocks=1 00:17:49.890 00:17:49.890 ' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.890 --rc genhtml_branch_coverage=1 00:17:49.890 --rc genhtml_function_coverage=1 00:17:49.890 --rc genhtml_legend=1 00:17:49.890 --rc geninfo_all_blocks=1 00:17:49.890 --rc geninfo_unexecuted_blocks=1 00:17:49.890 00:17:49.890 ' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:49.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.890 --rc genhtml_branch_coverage=1 00:17:49.890 --rc genhtml_function_coverage=1 00:17:49.890 --rc genhtml_legend=1 00:17:49.890 --rc geninfo_all_blocks=1 00:17:49.890 --rc geninfo_unexecuted_blocks=1 00:17:49.890 00:17:49.890 ' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73331 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73331 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73331 ']' 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:49.890 10:16:52 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.890 [2024-10-17 10:16:52.911980] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
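From this point the bdevperf process (started above with -z -T ftl0) is configured entirely over its RPC socket. For orientation, here is the RPC sequence the helper functions issue over the remainder of this log, condensed into one place (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; all arguments are taken verbatim from the trace below):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0      # base (data) NVMe
  rpc.py bdev_lvol_delete_lvstore -u 54248088-ee63-4328-b43c-8387b77b3f4f  # clear a stale lvstore
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 65463d56-515a-46a1-af7c-48da07e2ad34
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0       # cache NVMe
  rpc.py bdev_split_create nvc0n1 -s 5171 1                                # yields nvc0n1p0
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d aa283e49-fd9f-4582-a86b-920baae1be61 \
    -c nvc0n1p0 --l2p_dram_limit 20

Sizes are in MiB: the interleaved get_bdev_size calls work out to 1310720 blocks x 4096 B = 5120 MiB for nvme0n1 and 26476544 blocks x 4096 B = 103424 MiB for the thin-provisioned lvol, and --l2p_dram_limit 20 matches the script's l2p_dram_size_mb=20.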
00:17:49.890 [2024-10-17 10:16:52.912247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73331 ] 00:17:50.149 [2024-10-17 10:16:53.063160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.149 [2024-10-17 10:16:53.160577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:50.715 10:16:53 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:50.974 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:51.233 { 00:17:51.233 "name": "nvme0n1", 00:17:51.233 "aliases": [ 00:17:51.233 "ab6df606-e22c-4b38-bcbe-ce527c80e473" 00:17:51.233 ], 00:17:51.233 "product_name": "NVMe disk", 00:17:51.233 "block_size": 4096, 00:17:51.233 "num_blocks": 1310720, 00:17:51.233 "uuid": "ab6df606-e22c-4b38-bcbe-ce527c80e473", 00:17:51.233 "numa_id": -1, 00:17:51.233 "assigned_rate_limits": { 00:17:51.233 "rw_ios_per_sec": 0, 00:17:51.233 "rw_mbytes_per_sec": 0, 00:17:51.233 "r_mbytes_per_sec": 0, 00:17:51.233 "w_mbytes_per_sec": 0 00:17:51.233 }, 00:17:51.233 "claimed": true, 00:17:51.233 "claim_type": "read_many_write_one", 00:17:51.233 "zoned": false, 00:17:51.233 "supported_io_types": { 00:17:51.233 "read": true, 00:17:51.233 "write": true, 00:17:51.233 "unmap": true, 00:17:51.233 "flush": true, 00:17:51.233 "reset": true, 00:17:51.233 "nvme_admin": true, 00:17:51.233 "nvme_io": true, 00:17:51.233 "nvme_io_md": false, 00:17:51.233 "write_zeroes": true, 00:17:51.233 "zcopy": false, 00:17:51.233 "get_zone_info": false, 00:17:51.233 "zone_management": false, 00:17:51.233 "zone_append": false, 00:17:51.233 "compare": true, 00:17:51.233 "compare_and_write": false, 00:17:51.233 "abort": true, 00:17:51.233 "seek_hole": false, 00:17:51.233 "seek_data": false, 00:17:51.233 "copy": true, 00:17:51.233 "nvme_iov_md": false 00:17:51.233 }, 00:17:51.233 "driver_specific": { 00:17:51.233 
"nvme": [ 00:17:51.233 { 00:17:51.233 "pci_address": "0000:00:11.0", 00:17:51.233 "trid": { 00:17:51.233 "trtype": "PCIe", 00:17:51.233 "traddr": "0000:00:11.0" 00:17:51.233 }, 00:17:51.233 "ctrlr_data": { 00:17:51.233 "cntlid": 0, 00:17:51.233 "vendor_id": "0x1b36", 00:17:51.233 "model_number": "QEMU NVMe Ctrl", 00:17:51.233 "serial_number": "12341", 00:17:51.233 "firmware_revision": "8.0.0", 00:17:51.233 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:51.233 "oacs": { 00:17:51.233 "security": 0, 00:17:51.233 "format": 1, 00:17:51.233 "firmware": 0, 00:17:51.233 "ns_manage": 1 00:17:51.233 }, 00:17:51.233 "multi_ctrlr": false, 00:17:51.233 "ana_reporting": false 00:17:51.233 }, 00:17:51.233 "vs": { 00:17:51.233 "nvme_version": "1.4" 00:17:51.233 }, 00:17:51.233 "ns_data": { 00:17:51.233 "id": 1, 00:17:51.233 "can_share": false 00:17:51.233 } 00:17:51.233 } 00:17:51.233 ], 00:17:51.233 "mp_policy": "active_passive" 00:17:51.233 } 00:17:51.233 } 00:17:51.233 ]' 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:51.233 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:51.492 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:51.492 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:51.492 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=54248088-ee63-4328-b43c-8387b77b3f4f 00:17:51.492 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:51.492 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54248088-ee63-4328-b43c-8387b77b3f4f 00:17:51.750 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:52.008 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=65463d56-515a-46a1-af7c-48da07e2ad34 00:17:52.008 10:16:54 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 65463d56-515a-46a1-af7c-48da07e2ad34 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.266 10:16:55 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:52.266 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:52.525 { 00:17:52.525 "name": "aa283e49-fd9f-4582-a86b-920baae1be61", 00:17:52.525 "aliases": [ 00:17:52.525 "lvs/nvme0n1p0" 00:17:52.525 ], 00:17:52.525 "product_name": "Logical Volume", 00:17:52.525 "block_size": 4096, 00:17:52.525 "num_blocks": 26476544, 00:17:52.525 "uuid": "aa283e49-fd9f-4582-a86b-920baae1be61", 00:17:52.525 "assigned_rate_limits": { 00:17:52.525 "rw_ios_per_sec": 0, 00:17:52.525 "rw_mbytes_per_sec": 0, 00:17:52.525 "r_mbytes_per_sec": 0, 00:17:52.525 "w_mbytes_per_sec": 0 00:17:52.525 }, 00:17:52.525 "claimed": false, 00:17:52.525 "zoned": false, 00:17:52.525 "supported_io_types": { 00:17:52.525 "read": true, 00:17:52.525 "write": true, 00:17:52.525 "unmap": true, 00:17:52.525 "flush": false, 00:17:52.525 "reset": true, 00:17:52.525 "nvme_admin": false, 00:17:52.525 "nvme_io": false, 00:17:52.525 "nvme_io_md": false, 00:17:52.525 "write_zeroes": true, 00:17:52.525 "zcopy": false, 00:17:52.525 "get_zone_info": false, 00:17:52.525 "zone_management": false, 00:17:52.525 "zone_append": false, 00:17:52.525 "compare": false, 00:17:52.525 "compare_and_write": false, 00:17:52.525 "abort": false, 00:17:52.525 "seek_hole": true, 00:17:52.525 "seek_data": true, 00:17:52.525 "copy": false, 00:17:52.525 "nvme_iov_md": false 00:17:52.525 }, 00:17:52.525 "driver_specific": { 00:17:52.525 "lvol": { 00:17:52.525 "lvol_store_uuid": "65463d56-515a-46a1-af7c-48da07e2ad34", 00:17:52.525 "base_bdev": "nvme0n1", 00:17:52.525 "thin_provision": true, 00:17:52.525 "num_allocated_clusters": 0, 00:17:52.525 "snapshot": false, 00:17:52.525 "clone": false, 00:17:52.525 "esnap_clone": false 00:17:52.525 } 00:17:52.525 } 00:17:52.525 } 00:17:52.525 ]' 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:52.525 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa283e49-fd9f-4582-a86b-920baae1be61 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:52.784 { 00:17:52.784 "name": "aa283e49-fd9f-4582-a86b-920baae1be61", 00:17:52.784 "aliases": [ 00:17:52.784 "lvs/nvme0n1p0" 00:17:52.784 ], 00:17:52.784 "product_name": "Logical Volume", 00:17:52.784 "block_size": 4096, 00:17:52.784 "num_blocks": 26476544, 00:17:52.784 "uuid": "aa283e49-fd9f-4582-a86b-920baae1be61", 00:17:52.784 "assigned_rate_limits": { 00:17:52.784 "rw_ios_per_sec": 0, 00:17:52.784 "rw_mbytes_per_sec": 0, 00:17:52.784 "r_mbytes_per_sec": 0, 00:17:52.784 "w_mbytes_per_sec": 0 00:17:52.784 }, 00:17:52.784 "claimed": false, 00:17:52.784 "zoned": false, 00:17:52.784 "supported_io_types": { 00:17:52.784 "read": true, 00:17:52.784 "write": true, 00:17:52.784 "unmap": true, 00:17:52.784 "flush": false, 00:17:52.784 "reset": true, 00:17:52.784 "nvme_admin": false, 00:17:52.784 "nvme_io": false, 00:17:52.784 "nvme_io_md": false, 00:17:52.784 "write_zeroes": true, 00:17:52.784 "zcopy": false, 00:17:52.784 "get_zone_info": false, 00:17:52.784 "zone_management": false, 00:17:52.784 "zone_append": false, 00:17:52.784 "compare": false, 00:17:52.784 "compare_and_write": false, 00:17:52.784 "abort": false, 00:17:52.784 "seek_hole": true, 00:17:52.784 "seek_data": true, 00:17:52.784 "copy": false, 00:17:52.784 "nvme_iov_md": false 00:17:52.784 }, 00:17:52.784 "driver_specific": { 00:17:52.784 "lvol": { 00:17:52.784 "lvol_store_uuid": "65463d56-515a-46a1-af7c-48da07e2ad34", 00:17:52.784 "base_bdev": "nvme0n1", 00:17:52.784 "thin_provision": true, 00:17:52.784 "num_allocated_clusters": 0, 00:17:52.784 "snapshot": false, 00:17:52.784 "clone": false, 00:17:52.784 "esnap_clone": false 00:17:52.784 } 00:17:52.784 } 00:17:52.784 } 00:17:52.784 ]' 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:52.784 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:53.043 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:53.043 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:53.043 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:53.043 10:16:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:53.043 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:53.043 10:16:55 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size aa283e49-fd9f-4582-a86b-920baae1be61 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=aa283e49-fd9f-4582-a86b-920baae1be61 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:53.043 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa283e49-fd9f-4582-a86b-920baae1be61 00:17:53.301 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:53.301 { 00:17:53.301 "name": "aa283e49-fd9f-4582-a86b-920baae1be61", 00:17:53.301 "aliases": [ 00:17:53.301 "lvs/nvme0n1p0" 00:17:53.301 ], 00:17:53.301 "product_name": "Logical Volume", 00:17:53.301 "block_size": 4096, 00:17:53.301 "num_blocks": 26476544, 00:17:53.301 "uuid": "aa283e49-fd9f-4582-a86b-920baae1be61", 00:17:53.301 "assigned_rate_limits": { 00:17:53.301 "rw_ios_per_sec": 0, 00:17:53.301 "rw_mbytes_per_sec": 0, 00:17:53.301 "r_mbytes_per_sec": 0, 00:17:53.301 "w_mbytes_per_sec": 0 00:17:53.301 }, 00:17:53.301 "claimed": false, 00:17:53.301 "zoned": false, 00:17:53.301 "supported_io_types": { 00:17:53.301 "read": true, 00:17:53.301 "write": true, 00:17:53.301 "unmap": true, 00:17:53.301 "flush": false, 00:17:53.301 "reset": true, 00:17:53.301 "nvme_admin": false, 00:17:53.301 "nvme_io": false, 00:17:53.301 "nvme_io_md": false, 00:17:53.301 "write_zeroes": true, 00:17:53.301 "zcopy": false, 00:17:53.301 "get_zone_info": false, 00:17:53.301 "zone_management": false, 00:17:53.301 "zone_append": false, 00:17:53.301 "compare": false, 00:17:53.301 "compare_and_write": false, 00:17:53.301 "abort": false, 00:17:53.301 "seek_hole": true, 00:17:53.301 "seek_data": true, 00:17:53.301 "copy": false, 00:17:53.301 "nvme_iov_md": false 00:17:53.301 }, 00:17:53.301 "driver_specific": { 00:17:53.301 "lvol": { 00:17:53.302 "lvol_store_uuid": "65463d56-515a-46a1-af7c-48da07e2ad34", 00:17:53.302 "base_bdev": "nvme0n1", 00:17:53.302 "thin_provision": true, 00:17:53.302 "num_allocated_clusters": 0, 00:17:53.302 "snapshot": false, 00:17:53.302 "clone": false, 00:17:53.302 "esnap_clone": false 00:17:53.302 } 00:17:53.302 } 00:17:53.302 } 00:17:53.302 ]' 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:53.302 10:16:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aa283e49-fd9f-4582-a86b-920baae1be61 -c nvc0n1p0 --l2p_dram_limit 20 00:17:53.561 [2024-10-17 10:16:56.555599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.561 [2024-10-17 10:16:56.555757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:53.561 [2024-10-17 10:16:56.555775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.561 [2024-10-17 10:16:56.555783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.561 [2024-10-17 10:16:56.555834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.561 [2024-10-17 10:16:56.555845] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.561 [2024-10-17 10:16:56.555852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:53.561 [2024-10-17 10:16:56.555877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.561 [2024-10-17 10:16:56.555891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:53.562 [2024-10-17 10:16:56.556508] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:53.562 [2024-10-17 10:16:56.556522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.556533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.562 [2024-10-17 10:16:56.556540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:17:53.562 [2024-10-17 10:16:56.556547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.556598] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4bbd3eb7-b9c2-4ea4-a746-14a88bd870fe 00:17:53.562 [2024-10-17 10:16:56.557670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.557694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:53.562 [2024-10-17 10:16:56.557704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:17:53.562 [2024-10-17 10:16:56.557713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.562663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.562765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.562 [2024-10-17 10:16:56.562780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:17:53.562 [2024-10-17 10:16:56.562786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.562860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.562868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.562 [2024-10-17 10:16:56.562880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:53.562 [2024-10-17 10:16:56.562886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.562921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.562928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:53.562 [2024-10-17 10:16:56.562936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:53.562 [2024-10-17 10:16:56.562941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.562959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:53.562 [2024-10-17 10:16:56.565911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.566001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.562 [2024-10-17 10:16:56.566013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:17:53.562 [2024-10-17 10:16:56.566022] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.566064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.566073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:53.562 [2024-10-17 10:16:56.566082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:53.562 [2024-10-17 10:16:56.566098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.566118] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:53.562 [2024-10-17 10:16:56.566228] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:53.562 [2024-10-17 10:16:56.566237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:53.562 [2024-10-17 10:16:56.566249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:53.562 [2024-10-17 10:16:56.566257] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566265] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566272] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:53.562 [2024-10-17 10:16:56.566279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:53.562 [2024-10-17 10:16:56.566284] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:53.562 [2024-10-17 10:16:56.566291] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:53.562 [2024-10-17 10:16:56.566297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.566304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:53.562 [2024-10-17 10:16:56.566310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:17:53.562 [2024-10-17 10:16:56.566318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.566382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.562 [2024-10-17 10:16:56.566391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:53.562 [2024-10-17 10:16:56.566397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:53.562 [2024-10-17 10:16:56.566405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.562 [2024-10-17 10:16:56.566474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:53.562 [2024-10-17 10:16:56.566482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:53.562 [2024-10-17 10:16:56.566488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:53.562 [2024-10-17 10:16:56.566508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:53.562 
[2024-10-17 10:16:56.566519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:53.562 [2024-10-17 10:16:56.566525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.562 [2024-10-17 10:16:56.566536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:53.562 [2024-10-17 10:16:56.566542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:53.562 [2024-10-17 10:16:56.566548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.562 [2024-10-17 10:16:56.566559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:53.562 [2024-10-17 10:16:56.566564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:53.562 [2024-10-17 10:16:56.566572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:53.562 [2024-10-17 10:16:56.566586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:53.562 [2024-10-17 10:16:56.566603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:53.562 [2024-10-17 10:16:56.566623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:53.562 [2024-10-17 10:16:56.566639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:53.562 [2024-10-17 10:16:56.566657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:53.562 [2024-10-17 10:16:56.566674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.562 [2024-10-17 10:16:56.566686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:53.562 [2024-10-17 10:16:56.566692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:53.562 [2024-10-17 10:16:56.566697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.562 [2024-10-17 10:16:56.566703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:53.562 [2024-10-17 10:16:56.566708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:53.562 [2024-10-17 10:16:56.566715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:53.562 [2024-10-17 10:16:56.566727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:53.562 [2024-10-17 10:16:56.566731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566738] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:53.562 [2024-10-17 10:16:56.566745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:53.562 [2024-10-17 10:16:56.566752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.562 [2024-10-17 10:16:56.566767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:53.562 [2024-10-17 10:16:56.566772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:53.562 [2024-10-17 10:16:56.566779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:53.562 [2024-10-17 10:16:56.566785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:53.562 [2024-10-17 10:16:56.566791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:53.562 [2024-10-17 10:16:56.566796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:53.562 [2024-10-17 10:16:56.566805] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:53.562 [2024-10-17 10:16:56.566813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:53.563 [2024-10-17 10:16:56.566826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:53.563 [2024-10-17 10:16:56.566833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:53.563 [2024-10-17 10:16:56.566838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:53.563 [2024-10-17 10:16:56.566846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:53.563 [2024-10-17 10:16:56.566851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:53.563 [2024-10-17 10:16:56.566858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:53.563 [2024-10-17 10:16:56.566863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:53.563 [2024-10-17 10:16:56.566871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:53.563 [2024-10-17 10:16:56.566877] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:53.563 [2024-10-17 10:16:56.566908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:53.563 [2024-10-17 10:16:56.566914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:53.563 [2024-10-17 10:16:56.566929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:53.563 [2024-10-17 10:16:56.566935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:53.563 [2024-10-17 10:16:56.566942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:53.563 [2024-10-17 10:16:56.566949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.563 [2024-10-17 10:16:56.566955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:53.563 [2024-10-17 10:16:56.566961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:53.563 [2024-10-17 10:16:56.566968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.563 [2024-10-17 10:16:56.567007] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
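The layout summary above is internally consistent and can be checked with nothing more than shell arithmetic: the base device capacity of 103424.00 MiB is exactly the logical volume reported earlier (26476544 blocks of 4096 bytes), the NV cache capacity of 5171.00 MiB matches the nvc0n1p0 split created before bdev_ftl_create, and the 80.00 MiB l2p region is 20971520 L2P entries at 4 bytes per address. A minimal sketch of the arithmetic, plain shell with no SPDK dependency, restating only numbers that appear in the log:

    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 -> matches "Base device capacity: 103424.00 MiB"
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80 -> matches the 80.00 MiB l2p region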
00:17:53.563 [2024-10-17 10:16:56.567018] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:56.858 [2024-10-17 10:16:59.437458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.437621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:56.858 [2024-10-17 10:16:59.437690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2870.441 ms 00:17:56.858 [2024-10-17 10:16:59.437717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.463340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.463485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.858 [2024-10-17 10:16:59.463551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.408 ms 00:17:56.858 [2024-10-17 10:16:59.463574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.463713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.463740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.858 [2024-10-17 10:16:59.463806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:56.858 [2024-10-17 10:16:59.463829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.505983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.506157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.858 [2024-10-17 10:16:59.506295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.101 ms 00:17:56.858 [2024-10-17 10:16:59.506322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.506370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.506473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.858 [2024-10-17 10:16:59.506510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.858 [2024-10-17 10:16:59.506533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.506899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.507066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.858 [2024-10-17 10:16:59.507098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:56.858 [2024-10-17 10:16:59.507118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.507240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.507324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.858 [2024-10-17 10:16:59.507352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:56.858 [2024-10-17 10:16:59.507371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.520502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.520610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.858 [2024-10-17 
10:16:59.520661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.101 ms 00:17:56.858 [2024-10-17 10:16:59.520683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.532069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:56.858 [2024-10-17 10:16:59.537331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.537436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.858 [2024-10-17 10:16:59.537482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.551 ms 00:17:56.858 [2024-10-17 10:16:59.537507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.858 [2024-10-17 10:16:59.607779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.858 [2024-10-17 10:16:59.607974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:56.859 [2024-10-17 10:16:59.608042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.236 ms 00:17:56.859 [2024-10-17 10:16:59.608083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.608270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.608337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.859 [2024-10-17 10:16:59.608363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:17:56.859 [2024-10-17 10:16:59.608384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.633102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.633228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:56.859 [2024-10-17 10:16:59.633280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.640 ms 00:17:56.859 [2024-10-17 10:16:59.633306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.656875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.656995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:56.859 [2024-10-17 10:16:59.657078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.528 ms 00:17:56.859 [2024-10-17 10:16:59.657102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.657689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.657766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.859 [2024-10-17 10:16:59.657811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:17:56.859 [2024-10-17 10:16:59.657837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.731416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.731549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:56.859 [2024-10-17 10:16:59.731603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.534 ms 00:17:56.859 [2024-10-17 10:16:59.731628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 
10:16:59.756656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.756781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:56.859 [2024-10-17 10:16:59.756834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.940 ms 00:17:56.859 [2024-10-17 10:16:59.756860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.780711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.780828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:56.859 [2024-10-17 10:16:59.780878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.808 ms 00:17:56.859 [2024-10-17 10:16:59.780903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.805594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.805717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.859 [2024-10-17 10:16:59.805769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.651 ms 00:17:56.859 [2024-10-17 10:16:59.805794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.805838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.805869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.859 [2024-10-17 10:16:59.805889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.859 [2024-10-17 10:16:59.805909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.805995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.859 [2024-10-17 10:16:59.806085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.859 [2024-10-17 10:16:59.806132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:56.859 [2024-10-17 10:16:59.806154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.859 [2024-10-17 10:16:59.806975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3250.967 ms, result 0 00:17:56.859 { 00:17:56.859 "name": "ftl0", 00:17:56.859 "uuid": "4bbd3eb7-b9c2-4ea4-a746-14a88bd870fe" 00:17:56.859 } 00:17:56.859 10:16:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:56.859 10:16:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:56.859 10:16:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:57.117 10:17:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:57.117 [2024-10-17 10:17:00.115189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:57.117 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:57.117 Zero copy mechanism will not be used. 00:17:57.117 Running I/O for 4 seconds... 
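The zero-copy notice follows directly from the chosen transfer size: 69632 bytes is 65536 + 4096, one 4 KiB block past the 65536-byte zero-copy threshold that bdevperf reports, so this run falls back to buffered copies. A quick check, using only the numbers printed above:

    echo $(( 69632 - 65536 ))   # 4096 -> the I/O size exceeds the threshold by exactly one 4 KiB block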
00:17:59.428 992.00 IOPS, 65.88 MiB/s [2024-10-17T10:17:03.452Z] 859.00 IOPS, 57.04 MiB/s [2024-10-17T10:17:04.386Z] 828.67 IOPS, 55.03 MiB/s [2024-10-17T10:17:04.386Z] 885.75 IOPS, 58.82 MiB/s 00:18:01.295 Latency(us) 00:18:01.295 [2024-10-17T10:17:04.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.295 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:01.295 ftl0 : 4.00 885.61 58.81 0.00 0.00 1184.03 206.38 2545.82 00:18:01.295 [2024-10-17T10:17:04.386Z] =================================================================================================================== 00:18:01.295 [2024-10-17T10:17:04.386Z] Total : 885.61 58.81 0.00 0.00 1184.03 206.38 2545.82 00:18:01.295 { 00:18:01.295 "results": [ 00:18:01.295 { 00:18:01.295 "job": "ftl0", 00:18:01.295 "core_mask": "0x1", 00:18:01.295 "workload": "randwrite", 00:18:01.295 "status": "finished", 00:18:01.295 "queue_depth": 1, 00:18:01.295 "io_size": 69632, 00:18:01.295 "runtime": 4.00175, 00:18:01.295 "iops": 885.6125445117762, 00:18:01.295 "mibps": 58.810208033985134, 00:18:01.295 "io_failed": 0, 00:18:01.295 "io_timeout": 0, 00:18:01.295 "avg_latency_us": 1184.0326723389478, 00:18:01.295 "min_latency_us": 206.3753846153846, 00:18:01.295 "max_latency_us": 2545.8215384615382 00:18:01.295 } 00:18:01.295 ], 00:18:01.295 "core_count": 1 00:18:01.295 } 00:18:01.295 [2024-10-17 10:17:04.125530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:01.295 10:17:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:01.295 [2024-10-17 10:17:04.228514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:01.295 Running I/O for 4 seconds... 
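For the queue-depth-1 run above, the summary row and the JSON block agree by construction, since throughput in MiB/s is just IOPS times the I/O size: 885.61 IOPS at 69632 bytes per I/O is the reported 58.81 MiB/s. A one-line awk cross-check using the values from the JSON:

    awk 'BEGIN { printf "%.2f MiB/s\n", 885.6125445117762 * 69632 / 1048576 }'   # 58.81 MiB/s, matching "mibps"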
00:18:03.166 5996.00 IOPS, 23.42 MiB/s [2024-10-17T10:17:07.245Z] 6634.50 IOPS, 25.92 MiB/s [2024-10-17T10:17:08.619Z] 6479.67 IOPS, 25.31 MiB/s [2024-10-17T10:17:08.619Z] 6474.00 IOPS, 25.29 MiB/s 00:18:05.528 Latency(us) 00:18:05.528 [2024-10-17T10:17:08.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.528 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.528 ftl0 : 4.03 6453.92 25.21 0.00 0.00 19767.20 256.79 121796.14 00:18:05.528 [2024-10-17T10:17:08.619Z] =================================================================================================================== 00:18:05.528 [2024-10-17T10:17:08.619Z] Total : 6453.92 25.21 0.00 0.00 19767.20 0.00 121796.14 00:18:05.528 [2024-10-17 10:17:08.268328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:05.528 { 00:18:05.528 "results": [ 00:18:05.528 { 00:18:05.528 "job": "ftl0", 00:18:05.528 "core_mask": "0x1", 00:18:05.528 "workload": "randwrite", 00:18:05.528 "status": "finished", 00:18:05.528 "queue_depth": 128, 00:18:05.528 "io_size": 4096, 00:18:05.528 "runtime": 4.030418, 00:18:05.528 "iops": 6453.921156564902, 00:18:05.528 "mibps": 25.21062951783165, 00:18:05.528 "io_failed": 0, 00:18:05.528 "io_timeout": 0, 00:18:05.528 "avg_latency_us": 19767.199873194622, 00:18:05.528 "min_latency_us": 256.7876923076923, 00:18:05.528 "max_latency_us": 121796.13538461538 00:18:05.528 } 00:18:05.528 ], 00:18:05.528 "core_count": 1 00:18:05.528 } 00:18:05.528 10:17:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:05.528 [2024-10-17 10:17:08.381577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:05.528 Running I/O for 4 seconds... 
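The queue-depth-128 randwrite run above is also consistent with Little's law: average in-flight I/O equals IOPS times average latency, and 6453.92 IOPS at 19767.2 us average comes to roughly 127.6, just under the configured queue depth of 128 (the queue is briefly not full during ramp-up and drain). A quick awk check with the JSON values:

    awk 'BEGIN { printf "%.1f\n", 6453.921156564902 * 19767.199873194622 / 1e6 }'   # ~127.6 in flight, vs. -q 128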
00:18:07.398 5820.00 IOPS, 22.73 MiB/s [2024-10-17T10:17:11.422Z] 5480.00 IOPS, 21.41 MiB/s [2024-10-17T10:17:12.796Z] 5406.67 IOPS, 21.12 MiB/s [2024-10-17T10:17:12.796Z] 5312.50 IOPS, 20.75 MiB/s 00:18:09.705 Latency(us) 00:18:09.705 [2024-10-17T10:17:12.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.705 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:09.705 Verification LBA range: start 0x0 length 0x1400000 00:18:09.705 ftl0 : 4.01 5324.72 20.80 0.00 0.00 23964.27 281.99 36700.16 00:18:09.705 [2024-10-17T10:17:12.796Z] =================================================================================================================== 00:18:09.705 [2024-10-17T10:17:12.796Z] Total : 5324.72 20.80 0.00 0.00 23964.27 0.00 36700.16 00:18:09.705 [2024-10-17 10:17:12.411259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:09.705 { 00:18:09.705 "results": [ 00:18:09.705 { 00:18:09.705 "job": "ftl0", 00:18:09.705 "core_mask": "0x1", 00:18:09.705 "workload": "verify", 00:18:09.705 "status": "finished", 00:18:09.705 "verify_range": { 00:18:09.705 "start": 0, 00:18:09.705 "length": 20971520 00:18:09.706 }, 00:18:09.706 "queue_depth": 128, 00:18:09.706 "io_size": 4096, 00:18:09.706 "runtime": 4.014862, 00:18:09.706 "iops": 5324.716017636471, 00:18:09.706 "mibps": 20.799671943892466, 00:18:09.706 "io_failed": 0, 00:18:09.706 "io_timeout": 0, 00:18:09.706 "avg_latency_us": 23964.273430197834, 00:18:09.706 "min_latency_us": 281.99384615384616, 00:18:09.706 "max_latency_us": 36700.16 00:18:09.706 } 00:18:09.706 ], 00:18:09.706 "core_count": 1 00:18:09.706 } 00:18:09.706 10:17:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:09.706 [2024-10-17 10:17:12.617503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.706 [2024-10-17 10:17:12.617556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:09.706 [2024-10-17 10:17:12.617570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:09.706 [2024-10-17 10:17:12.617580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-10-17 10:17:12.617602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:09.706 [2024-10-17 10:17:12.620261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.706 [2024-10-17 10:17:12.620290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:09.706 [2024-10-17 10:17:12.620302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:18:09.706 [2024-10-17 10:17:12.620311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-10-17 10:17:12.622884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.706 [2024-10-17 10:17:12.622917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:09.706 [2024-10-17 10:17:12.622931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:18:09.706 [2024-10-17 10:17:12.622939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.964 [2024-10-17 10:17:12.810805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.964 [2024-10-17 10:17:12.810856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:09.964 [2024-10-17 10:17:12.810878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 187.839 ms 00:18:09.964 [2024-10-17 10:17:12.810886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.964 [2024-10-17 10:17:12.817063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.964 [2024-10-17 10:17:12.817187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.964 [2024-10-17 10:17:12.817208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.144 ms 00:18:09.964 [2024-10-17 10:17:12.817217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.964 [2024-10-17 10:17:12.841360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.964 [2024-10-17 10:17:12.841391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.964 [2024-10-17 10:17:12.841404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.083 ms 00:18:09.964 [2024-10-17 10:17:12.841412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.857228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.965 [2024-10-17 10:17:12.857261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.965 [2024-10-17 10:17:12.857276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.784 ms 00:18:09.965 [2024-10-17 10:17:12.857284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.857421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.965 [2024-10-17 10:17:12.857432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.965 [2024-10-17 10:17:12.857444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:09.965 [2024-10-17 10:17:12.857452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.881380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.965 [2024-10-17 10:17:12.881409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:09.965 [2024-10-17 10:17:12.881421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:18:09.965 [2024-10-17 10:17:12.881428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.904770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.965 [2024-10-17 10:17:12.904895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:09.965 [2024-10-17 10:17:12.904914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.309 ms 00:18:09.965 [2024-10-17 10:17:12.904921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.927516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.965 [2024-10-17 10:17:12.927634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:09.965 [2024-10-17 10:17:12.927652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.564 ms 00:18:09.965 [2024-10-17 10:17:12.927660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.950083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.965 [2024-10-17 10:17:12.950196] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:09.965 [2024-10-17 10:17:12.950216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.363 ms 00:18:09.965 [2024-10-17 10:17:12.950224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.965 [2024-10-17 10:17:12.950252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:09.965 [2024-10-17 10:17:12.950265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:09.965 [2024-10-17 10:17:12.950450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:09.965 [2024-10-17 10:17:12.950878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.950993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951101] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:09.966 [2024-10-17 10:17:12.951145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.966 [2024-10-17 10:17:12.951154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4bbd3eb7-b9c2-4ea4-a746-14a88bd870fe 00:18:09.966 [2024-10-17 10:17:12.951162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.966 [2024-10-17 10:17:12.951171] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.966 [2024-10-17 10:17:12.951178] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.966 [2024-10-17 10:17:12.951187] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.966 [2024-10-17 10:17:12.951195] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.966 [2024-10-17 10:17:12.951204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:09.966 [2024-10-17 10:17:12.951211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.966 [2024-10-17 10:17:12.951221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.966 [2024-10-17 10:17:12.951227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.966 [2024-10-17 10:17:12.951236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.966 [2024-10-17 10:17:12.951243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.966 [2024-10-17 10:17:12.951252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:18:09.966 [2024-10-17 10:17:12.951260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.966 [2024-10-17 10:17:12.963558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.966 [2024-10-17 10:17:12.963587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.966 [2024-10-17 10:17:12.963603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.269 ms 00:18:09.966 [2024-10-17 10:17:12.963611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.966 [2024-10-17 10:17:12.963961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.966 [2024-10-17 10:17:12.963970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.966 [2024-10-17 10:17:12.963980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:09.966 [2024-10-17 10:17:12.963987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.966 [2024-10-17 10:17:12.998797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.966 [2024-10-17 10:17:12.998832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.966 [2024-10-17 10:17:12.998849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.966 [2024-10-17 10:17:12.998858] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:09.966 [2024-10-17 10:17:12.998917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.966 [2024-10-17 10:17:12.998924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.966 [2024-10-17 10:17:12.998933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.966 [2024-10-17 10:17:12.998940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.966 [2024-10-17 10:17:12.999002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.966 [2024-10-17 10:17:12.999012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.966 [2024-10-17 10:17:12.999021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.966 [2024-10-17 10:17:12.999030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.966 [2024-10-17 10:17:12.999069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.966 [2024-10-17 10:17:12.999078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.966 [2024-10-17 10:17:12.999088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.966 [2024-10-17 10:17:12.999095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.075375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.075414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.224 [2024-10-17 10:17:13.075431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.075440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.138647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.138693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.224 [2024-10-17 10:17:13.138706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.138714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.138795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.138804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.224 [2024-10-17 10:17:13.138814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.138821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.138865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.138874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.224 [2024-10-17 10:17:13.138884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.138891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.138976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.138985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.224 [2024-10-17 10:17:13.138997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:10.224 [2024-10-17 10:17:13.139004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.139037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.139071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.224 [2024-10-17 10:17:13.139082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.139090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.139124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.139132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.224 [2024-10-17 10:17:13.139142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.139149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.139193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.224 [2024-10-17 10:17:13.139208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.224 [2024-10-17 10:17:13.139218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.224 [2024-10-17 10:17:13.139225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.224 [2024-10-17 10:17:13.139340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.804 ms, result 0 00:18:10.224 true 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73331 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73331 ']' 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73331 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73331 00:18:10.224 killing process with pid 73331 00:18:10.224 Received shutdown signal, test time was about 4.000000 seconds 00:18:10.224 00:18:10.224 Latency(us) 00:18:10.224 [2024-10-17T10:17:13.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.224 [2024-10-17T10:17:13.315Z] =================================================================================================================== 00:18:10.224 [2024-10-17T10:17:13.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73331' 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73331 00:18:10.224 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73331 00:18:11.171 Remove shared memory files 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:11.171 10:17:13 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:11.171 ************************************ 00:18:11.171 END TEST ftl_bdevperf 00:18:11.171 ************************************ 00:18:11.171 00:18:11.171 real 0m21.268s 00:18:11.171 user 0m23.908s 00:18:11.171 sys 0m0.815s 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.171 10:17:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 10:17:14 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:11.171 10:17:14 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:11.171 10:17:14 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.171 10:17:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 ************************************ 00:18:11.171 START TEST ftl_trim 00:18:11.171 ************************************ 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:11.171 * Looking for test storage... 00:18:11.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.171 10:17:14 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.171 --rc genhtml_branch_coverage=1 00:18:11.171 --rc genhtml_function_coverage=1 00:18:11.171 --rc genhtml_legend=1 00:18:11.171 --rc geninfo_all_blocks=1 00:18:11.171 --rc geninfo_unexecuted_blocks=1 00:18:11.171 00:18:11.171 ' 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.171 --rc genhtml_branch_coverage=1 00:18:11.171 --rc genhtml_function_coverage=1 00:18:11.171 --rc genhtml_legend=1 00:18:11.171 --rc geninfo_all_blocks=1 00:18:11.171 --rc geninfo_unexecuted_blocks=1 00:18:11.171 00:18:11.171 ' 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.171 --rc genhtml_branch_coverage=1 00:18:11.171 --rc genhtml_function_coverage=1 00:18:11.171 --rc genhtml_legend=1 00:18:11.171 --rc geninfo_all_blocks=1 00:18:11.171 --rc geninfo_unexecuted_blocks=1 00:18:11.171 00:18:11.171 ' 00:18:11.171 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.171 --rc genhtml_branch_coverage=1 00:18:11.171 --rc genhtml_function_coverage=1 00:18:11.171 --rc genhtml_legend=1 00:18:11.171 --rc geninfo_all_blocks=1 00:18:11.171 --rc geninfo_unexecuted_blocks=1 00:18:11.171 00:18:11.171 ' 00:18:11.171 10:17:14 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:11.171 10:17:14 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:11.171 10:17:14 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.171 10:17:14 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.171 10:17:14 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.172 10:17:14 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73674 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73674 00:18:11.172 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73674 ']' 00:18:11.172 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.172 10:17:14 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:11.172 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.172 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.172 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.172 10:17:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:11.461 [2024-10-17 10:17:14.275230] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:18:11.461 [2024-10-17 10:17:14.275494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73674 ] 00:18:11.461 [2024-10-17 10:17:14.426554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:11.461 [2024-10-17 10:17:14.531949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.461 [2024-10-17 10:17:14.532348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.461 [2024-10-17 10:17:14.532245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.402 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.402 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:12.402 10:17:15 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:12.402 10:17:15 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:12.402 10:17:15 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:12.402 10:17:15 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:12.402 10:17:15 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:12.402 10:17:15 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:12.660 10:17:15 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:12.660 10:17:15 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:12.660 10:17:15 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:12.660 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:12.660 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:12.660 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:12.660 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:12.660 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:12.660 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:12.660 { 00:18:12.660 "name": "nvme0n1", 00:18:12.660 "aliases": [ 
00:18:12.660 "e75b75b6-0b00-42dc-9722-77d6c1006f6a" 00:18:12.660 ], 00:18:12.660 "product_name": "NVMe disk", 00:18:12.660 "block_size": 4096, 00:18:12.660 "num_blocks": 1310720, 00:18:12.660 "uuid": "e75b75b6-0b00-42dc-9722-77d6c1006f6a", 00:18:12.660 "numa_id": -1, 00:18:12.660 "assigned_rate_limits": { 00:18:12.660 "rw_ios_per_sec": 0, 00:18:12.660 "rw_mbytes_per_sec": 0, 00:18:12.660 "r_mbytes_per_sec": 0, 00:18:12.660 "w_mbytes_per_sec": 0 00:18:12.660 }, 00:18:12.661 "claimed": true, 00:18:12.661 "claim_type": "read_many_write_one", 00:18:12.661 "zoned": false, 00:18:12.661 "supported_io_types": { 00:18:12.661 "read": true, 00:18:12.661 "write": true, 00:18:12.661 "unmap": true, 00:18:12.661 "flush": true, 00:18:12.661 "reset": true, 00:18:12.661 "nvme_admin": true, 00:18:12.661 "nvme_io": true, 00:18:12.661 "nvme_io_md": false, 00:18:12.661 "write_zeroes": true, 00:18:12.661 "zcopy": false, 00:18:12.661 "get_zone_info": false, 00:18:12.661 "zone_management": false, 00:18:12.661 "zone_append": false, 00:18:12.661 "compare": true, 00:18:12.661 "compare_and_write": false, 00:18:12.661 "abort": true, 00:18:12.661 "seek_hole": false, 00:18:12.661 "seek_data": false, 00:18:12.661 "copy": true, 00:18:12.661 "nvme_iov_md": false 00:18:12.661 }, 00:18:12.661 "driver_specific": { 00:18:12.661 "nvme": [ 00:18:12.661 { 00:18:12.661 "pci_address": "0000:00:11.0", 00:18:12.661 "trid": { 00:18:12.661 "trtype": "PCIe", 00:18:12.661 "traddr": "0000:00:11.0" 00:18:12.661 }, 00:18:12.661 "ctrlr_data": { 00:18:12.661 "cntlid": 0, 00:18:12.661 "vendor_id": "0x1b36", 00:18:12.661 "model_number": "QEMU NVMe Ctrl", 00:18:12.661 "serial_number": "12341", 00:18:12.661 "firmware_revision": "8.0.0", 00:18:12.661 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:12.661 "oacs": { 00:18:12.661 "security": 0, 00:18:12.661 "format": 1, 00:18:12.661 "firmware": 0, 00:18:12.661 "ns_manage": 1 00:18:12.661 }, 00:18:12.661 "multi_ctrlr": false, 00:18:12.661 "ana_reporting": false 00:18:12.661 }, 00:18:12.661 "vs": { 00:18:12.661 "nvme_version": "1.4" 00:18:12.661 }, 00:18:12.661 "ns_data": { 00:18:12.661 "id": 1, 00:18:12.661 "can_share": false 00:18:12.661 } 00:18:12.661 } 00:18:12.661 ], 00:18:12.661 "mp_policy": "active_passive" 00:18:12.661 } 00:18:12.661 } 00:18:12.661 ]' 00:18:12.661 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:12.661 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:12.661 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:12.918 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:12.918 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:12.918 10:17:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=65463d56-515a-46a1-af7c-48da07e2ad34 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:12.918 10:17:15 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 65463d56-515a-46a1-af7c-48da07e2ad34 00:18:13.176 10:17:16 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:13.433 10:17:16 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7a805d68-6ac4-4d87-9fb9-5737246be389 00:18:13.433 10:17:16 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7a805d68-6ac4-4d87-9fb9-5737246be389 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=42844582-5342-44c6-8f05-71c9e83b89b0 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=42844582-5342-44c6-8f05-71c9e83b89b0 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:13.690 10:17:16 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:13.690 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=42844582-5342-44c6-8f05-71c9e83b89b0 00:18:13.690 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:13.690 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:13.690 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:13.690 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:13.948 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:13.948 { 00:18:13.948 "name": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:13.948 "aliases": [ 00:18:13.948 "lvs/nvme0n1p0" 00:18:13.948 ], 00:18:13.948 "product_name": "Logical Volume", 00:18:13.948 "block_size": 4096, 00:18:13.948 "num_blocks": 26476544, 00:18:13.948 "uuid": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:13.948 "assigned_rate_limits": { 00:18:13.948 "rw_ios_per_sec": 0, 00:18:13.948 "rw_mbytes_per_sec": 0, 00:18:13.948 "r_mbytes_per_sec": 0, 00:18:13.948 "w_mbytes_per_sec": 0 00:18:13.948 }, 00:18:13.948 "claimed": false, 00:18:13.948 "zoned": false, 00:18:13.948 "supported_io_types": { 00:18:13.948 "read": true, 00:18:13.948 "write": true, 00:18:13.948 "unmap": true, 00:18:13.948 "flush": false, 00:18:13.948 "reset": true, 00:18:13.948 "nvme_admin": false, 00:18:13.948 "nvme_io": false, 00:18:13.948 "nvme_io_md": false, 00:18:13.948 "write_zeroes": true, 00:18:13.948 "zcopy": false, 00:18:13.948 "get_zone_info": false, 00:18:13.948 "zone_management": false, 00:18:13.948 "zone_append": false, 00:18:13.948 "compare": false, 00:18:13.948 "compare_and_write": false, 00:18:13.948 "abort": false, 00:18:13.948 "seek_hole": true, 00:18:13.948 "seek_data": true, 00:18:13.948 "copy": false, 00:18:13.948 "nvme_iov_md": false 00:18:13.948 }, 00:18:13.948 "driver_specific": { 00:18:13.948 "lvol": { 00:18:13.948 "lvol_store_uuid": "7a805d68-6ac4-4d87-9fb9-5737246be389", 00:18:13.948 "base_bdev": "nvme0n1", 00:18:13.948 "thin_provision": true, 00:18:13.948 "num_allocated_clusters": 0, 00:18:13.948 "snapshot": false, 00:18:13.948 "clone": false, 00:18:13.948 "esnap_clone": false 00:18:13.948 } 00:18:13.948 } 00:18:13.948 } 00:18:13.948 ]' 00:18:13.948 10:17:16 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:13.948 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:13.948 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:13.948 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:13.948 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:13.948 10:17:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:13.948 10:17:16 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:13.948 10:17:16 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:13.948 10:17:16 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:14.206 10:17:17 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:14.206 10:17:17 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:14.206 10:17:17 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:14.206 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=42844582-5342-44c6-8f05-71c9e83b89b0 00:18:14.206 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:14.206 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:14.206 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:14.206 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:14.463 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:14.463 { 00:18:14.463 "name": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:14.463 "aliases": [ 00:18:14.463 "lvs/nvme0n1p0" 00:18:14.463 ], 00:18:14.463 "product_name": "Logical Volume", 00:18:14.463 "block_size": 4096, 00:18:14.463 "num_blocks": 26476544, 00:18:14.463 "uuid": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:14.463 "assigned_rate_limits": { 00:18:14.463 "rw_ios_per_sec": 0, 00:18:14.463 "rw_mbytes_per_sec": 0, 00:18:14.463 "r_mbytes_per_sec": 0, 00:18:14.463 "w_mbytes_per_sec": 0 00:18:14.463 }, 00:18:14.463 "claimed": false, 00:18:14.463 "zoned": false, 00:18:14.463 "supported_io_types": { 00:18:14.463 "read": true, 00:18:14.463 "write": true, 00:18:14.463 "unmap": true, 00:18:14.463 "flush": false, 00:18:14.463 "reset": true, 00:18:14.463 "nvme_admin": false, 00:18:14.463 "nvme_io": false, 00:18:14.463 "nvme_io_md": false, 00:18:14.463 "write_zeroes": true, 00:18:14.463 "zcopy": false, 00:18:14.463 "get_zone_info": false, 00:18:14.463 "zone_management": false, 00:18:14.463 "zone_append": false, 00:18:14.463 "compare": false, 00:18:14.463 "compare_and_write": false, 00:18:14.463 "abort": false, 00:18:14.463 "seek_hole": true, 00:18:14.463 "seek_data": true, 00:18:14.463 "copy": false, 00:18:14.463 "nvme_iov_md": false 00:18:14.463 }, 00:18:14.463 "driver_specific": { 00:18:14.463 "lvol": { 00:18:14.463 "lvol_store_uuid": "7a805d68-6ac4-4d87-9fb9-5737246be389", 00:18:14.463 "base_bdev": "nvme0n1", 00:18:14.463 "thin_provision": true, 00:18:14.463 "num_allocated_clusters": 0, 00:18:14.463 "snapshot": false, 00:18:14.463 "clone": false, 00:18:14.463 "esnap_clone": false 00:18:14.463 } 00:18:14.463 } 00:18:14.463 } 00:18:14.463 ]' 00:18:14.463 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:14.463 10:17:17 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:18:14.463 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:14.463 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:14.463 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:14.463 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:14.464 10:17:17 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:14.464 10:17:17 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:14.722 10:17:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:14.722 10:17:17 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:14.722 10:17:17 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:14.722 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=42844582-5342-44c6-8f05-71c9e83b89b0 00:18:14.722 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:14.722 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:14.722 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:14.722 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42844582-5342-44c6-8f05-71c9e83b89b0 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:14.980 { 00:18:14.980 "name": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:14.980 "aliases": [ 00:18:14.980 "lvs/nvme0n1p0" 00:18:14.980 ], 00:18:14.980 "product_name": "Logical Volume", 00:18:14.980 "block_size": 4096, 00:18:14.980 "num_blocks": 26476544, 00:18:14.980 "uuid": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:14.980 "assigned_rate_limits": { 00:18:14.980 "rw_ios_per_sec": 0, 00:18:14.980 "rw_mbytes_per_sec": 0, 00:18:14.980 "r_mbytes_per_sec": 0, 00:18:14.980 "w_mbytes_per_sec": 0 00:18:14.980 }, 00:18:14.980 "claimed": false, 00:18:14.980 "zoned": false, 00:18:14.980 "supported_io_types": { 00:18:14.980 "read": true, 00:18:14.980 "write": true, 00:18:14.980 "unmap": true, 00:18:14.980 "flush": false, 00:18:14.980 "reset": true, 00:18:14.980 "nvme_admin": false, 00:18:14.980 "nvme_io": false, 00:18:14.980 "nvme_io_md": false, 00:18:14.980 "write_zeroes": true, 00:18:14.980 "zcopy": false, 00:18:14.980 "get_zone_info": false, 00:18:14.980 "zone_management": false, 00:18:14.980 "zone_append": false, 00:18:14.980 "compare": false, 00:18:14.980 "compare_and_write": false, 00:18:14.980 "abort": false, 00:18:14.980 "seek_hole": true, 00:18:14.980 "seek_data": true, 00:18:14.980 "copy": false, 00:18:14.980 "nvme_iov_md": false 00:18:14.980 }, 00:18:14.980 "driver_specific": { 00:18:14.980 "lvol": { 00:18:14.980 "lvol_store_uuid": "7a805d68-6ac4-4d87-9fb9-5737246be389", 00:18:14.980 "base_bdev": "nvme0n1", 00:18:14.980 "thin_provision": true, 00:18:14.980 "num_allocated_clusters": 0, 00:18:14.980 "snapshot": false, 00:18:14.980 "clone": false, 00:18:14.980 "esnap_clone": false 00:18:14.980 } 00:18:14.980 } 00:18:14.980 } 00:18:14.980 ]' 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:14.980 10:17:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:14.980 10:17:17 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:14.980 10:17:17 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 42844582-5342-44c6-8f05-71c9e83b89b0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:14.980 [2024-10-17 10:17:18.062876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.980 [2024-10-17 10:17:18.062923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:14.980 [2024-10-17 10:17:18.062939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:14.980 [2024-10-17 10:17:18.062947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.980 [2024-10-17 10:17:18.065739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.980 [2024-10-17 10:17:18.065873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.980 [2024-10-17 10:17:18.065895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:18:14.980 [2024-10-17 10:17:18.065903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.980 [2024-10-17 10:17:18.066413] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:14.980 [2024-10-17 10:17:18.067243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:14.980 [2024-10-17 10:17:18.067285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.980 [2024-10-17 10:17:18.067295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.980 [2024-10-17 10:17:18.067307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:18:14.980 [2024-10-17 10:17:18.067314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.980 [2024-10-17 10:17:18.067389] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cca8bd94-b064-499b-a946-5f5c31e51e40 00:18:14.980 [2024-10-17 10:17:18.068537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.980 [2024-10-17 10:17:18.068572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:14.980 [2024-10-17 10:17:18.068583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:14.980 [2024-10-17 10:17:18.068595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.238 [2024-10-17 10:17:18.073920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.238 [2024-10-17 10:17:18.073952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.238 [2024-10-17 10:17:18.073961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:18:15.238 [2024-10-17 10:17:18.073970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.238 [2024-10-17 10:17:18.074115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.238 [2024-10-17 10:17:18.074131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.239 [2024-10-17 10:17:18.074140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.102 ms 00:18:15.239 [2024-10-17 10:17:18.074152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.239 [2024-10-17 10:17:18.074184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.239 [2024-10-17 10:17:18.074194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:15.239 [2024-10-17 10:17:18.074202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:15.239 [2024-10-17 10:17:18.074211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.239 [2024-10-17 10:17:18.074240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:15.239 [2024-10-17 10:17:18.077836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.239 [2024-10-17 10:17:18.077864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.239 [2024-10-17 10:17:18.077875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.599 ms 00:18:15.239 [2024-10-17 10:17:18.077882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.239 [2024-10-17 10:17:18.077940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.239 [2024-10-17 10:17:18.077949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:15.239 [2024-10-17 10:17:18.077958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:15.239 [2024-10-17 10:17:18.077977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.239 [2024-10-17 10:17:18.078015] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:15.239 [2024-10-17 10:17:18.078181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:15.239 [2024-10-17 10:17:18.078197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:15.239 [2024-10-17 10:17:18.078208] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:15.239 [2024-10-17 10:17:18.078219] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078228] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:15.239 [2024-10-17 10:17:18.078244] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:15.239 [2024-10-17 10:17:18.078253] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:15.239 [2024-10-17 10:17:18.078260] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:15.239 [2024-10-17 10:17:18.078269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.239 [2024-10-17 10:17:18.078277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:15.239 [2024-10-17 10:17:18.078288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:18:15.239 [2024-10-17 10:17:18.078295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.239 [2024-10-17 10:17:18.078391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.239 
[2024-10-17 10:17:18.078399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:15.239 [2024-10-17 10:17:18.078408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:15.239 [2024-10-17 10:17:18.078415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.239 [2024-10-17 10:17:18.078536] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:15.239 [2024-10-17 10:17:18.078545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:15.239 [2024-10-17 10:17:18.078555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:15.239 [2024-10-17 10:17:18.078579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:15.239 [2024-10-17 10:17:18.078602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.239 [2024-10-17 10:17:18.078617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:15.239 [2024-10-17 10:17:18.078624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:15.239 [2024-10-17 10:17:18.078632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.239 [2024-10-17 10:17:18.078639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:15.239 [2024-10-17 10:17:18.078647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:15.239 [2024-10-17 10:17:18.078654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:15.239 [2024-10-17 10:17:18.078669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:15.239 [2024-10-17 10:17:18.078693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:15.239 [2024-10-17 10:17:18.078716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:15.239 [2024-10-17 10:17:18.078739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:15.239 [2024-10-17 10:17:18.078759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:15.239 [2024-10-17 10:17:18.078784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.239 [2024-10-17 10:17:18.078798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:15.239 [2024-10-17 10:17:18.078805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:15.239 [2024-10-17 10:17:18.078812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.239 [2024-10-17 10:17:18.078819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:15.239 [2024-10-17 10:17:18.078827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:15.239 [2024-10-17 10:17:18.078833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:15.239 [2024-10-17 10:17:18.078847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:15.239 [2024-10-17 10:17:18.078855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:15.239 [2024-10-17 10:17:18.078870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:15.239 [2024-10-17 10:17:18.078877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.239 [2024-10-17 10:17:18.078894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:15.239 [2024-10-17 10:17:18.078904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:15.239 [2024-10-17 10:17:18.078911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:15.239 [2024-10-17 10:17:18.078919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:15.239 [2024-10-17 10:17:18.078926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:15.239 [2024-10-17 10:17:18.078934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:15.239 [2024-10-17 10:17:18.078943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:15.239 [2024-10-17 10:17:18.078954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.239 [2024-10-17 10:17:18.078964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:15.239 [2024-10-17 10:17:18.078973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:15.239 [2024-10-17 10:17:18.078981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:15.239 [2024-10-17 10:17:18.078989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:15.239 [2024-10-17 10:17:18.078996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:15.239 [2024-10-17 10:17:18.079005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:15.239 [2024-10-17 10:17:18.079012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:15.239 [2024-10-17 10:17:18.079020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:15.239 [2024-10-17 10:17:18.079027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:15.239 [2024-10-17 10:17:18.079037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:15.239 [2024-10-17 10:17:18.079044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:15.239 [2024-10-17 10:17:18.079300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:15.239 [2024-10-17 10:17:18.079333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:15.239 [2024-10-17 10:17:18.079363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:15.239 [2024-10-17 10:17:18.079392] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:15.239 [2024-10-17 10:17:18.079423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.239 [2024-10-17 10:17:18.079453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:15.240 [2024-10-17 10:17:18.079607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:15.240 [2024-10-17 10:17:18.079704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:15.240 [2024-10-17 10:17:18.079793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:15.240 [2024-10-17 10:17:18.079824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.240 [2024-10-17 10:17:18.079854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:15.240 [2024-10-17 10:17:18.079874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:18:15.240 [2024-10-17 10:17:18.079895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.240 [2024-10-17 10:17:18.079980] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:15.240 [2024-10-17 10:17:18.080082] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:20.502 [2024-10-17 10:17:22.734472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.502 [2024-10-17 10:17:22.734650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:20.502 [2024-10-17 10:17:22.734715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4654.475 ms 00:18:20.502 [2024-10-17 10:17:22.734742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.502 [2024-10-17 10:17:22.760680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.502 [2024-10-17 10:17:22.760822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.502 [2024-10-17 10:17:22.760888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.610 ms 00:18:20.502 [2024-10-17 10:17:22.760915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.502 [2024-10-17 10:17:22.761102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.502 [2024-10-17 10:17:22.761174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:20.503 [2024-10-17 10:17:22.761225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:20.503 [2024-10-17 10:17:22.761252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.801073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.801127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.503 [2024-10-17 10:17:22.801149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.758 ms 00:18:20.503 [2024-10-17 10:17:22.801163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.801290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.801310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.503 [2024-10-17 10:17:22.801323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:20.503 [2024-10-17 10:17:22.801335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.801719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.801751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.503 [2024-10-17 10:17:22.801765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:18:20.503 [2024-10-17 10:17:22.801781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.801946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.801960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.503 [2024-10-17 10:17:22.801972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:18:20.503 [2024-10-17 10:17:22.801987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.818655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.818687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:20.503 [2024-10-17 10:17:22.818698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.612 ms 00:18:20.503 [2024-10-17 10:17:22.818707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.830083] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:20.503 [2024-10-17 10:17:22.844257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.844395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:20.503 [2024-10-17 10:17:22.844415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.459 ms 00:18:20.503 [2024-10-17 10:17:22.844426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.915343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.915500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:20.503 [2024-10-17 10:17:22.915523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.853 ms 00:18:20.503 [2024-10-17 10:17:22.915537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.915751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.915764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:20.503 [2024-10-17 10:17:22.915776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:20.503 [2024-10-17 10:17:22.915784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:22.939218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:22.939250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:20.503 [2024-10-17 10:17:22.939265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.405 ms 00:18:20.503 [2024-10-17 10:17:22.939273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.076685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.076721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:20.503 [2024-10-17 10:17:23.076736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.351 ms 00:18:20.503 [2024-10-17 10:17:23.076743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.077357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.077383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:20.503 [2024-10-17 10:17:23.077394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:18:20.503 [2024-10-17 10:17:23.077401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.151262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.151298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:20.503 [2024-10-17 10:17:23.151314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.826 ms 00:18:20.503 [2024-10-17 10:17:23.151326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:20.503 [2024-10-17 10:17:23.175808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.175844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:20.503 [2024-10-17 10:17:23.175857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.385 ms 00:18:20.503 [2024-10-17 10:17:23.175865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.200243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.200280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:20.503 [2024-10-17 10:17:23.200293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.318 ms 00:18:20.503 [2024-10-17 10:17:23.200301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.223811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.223853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:20.503 [2024-10-17 10:17:23.223866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.434 ms 00:18:20.503 [2024-10-17 10:17:23.223886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.223952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.223962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:20.503 [2024-10-17 10:17:23.223973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:20.503 [2024-10-17 10:17:23.223982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.224074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-10-17 10:17:23.224084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:20.503 [2024-10-17 10:17:23.224094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:20.503 [2024-10-17 10:17:23.224101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-10-17 10:17:23.224862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.503 [2024-10-17 10:17:23.227950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5161.718 ms, result 0 00:18:20.503 [2024-10-17 10:17:23.228815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.503 { 00:18:20.503 "name": "ftl0", 00:18:20.503 "uuid": "cca8bd94-b064-499b-a946-5f5c31e51e40" 00:18:20.503 } 00:18:20.503 10:17:23 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:20.503 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:20.503 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:20.503 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:20.503 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:20.503 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:20.503 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:20.503 10:17:23 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:20.761 [ 00:18:20.761 { 00:18:20.761 "name": "ftl0", 00:18:20.761 "aliases": [ 00:18:20.761 "cca8bd94-b064-499b-a946-5f5c31e51e40" 00:18:20.761 ], 00:18:20.761 "product_name": "FTL disk", 00:18:20.761 "block_size": 4096, 00:18:20.761 "num_blocks": 23592960, 00:18:20.761 "uuid": "cca8bd94-b064-499b-a946-5f5c31e51e40", 00:18:20.761 "assigned_rate_limits": { 00:18:20.761 "rw_ios_per_sec": 0, 00:18:20.761 "rw_mbytes_per_sec": 0, 00:18:20.761 "r_mbytes_per_sec": 0, 00:18:20.761 "w_mbytes_per_sec": 0 00:18:20.761 }, 00:18:20.761 "claimed": false, 00:18:20.761 "zoned": false, 00:18:20.761 "supported_io_types": { 00:18:20.761 "read": true, 00:18:20.761 "write": true, 00:18:20.761 "unmap": true, 00:18:20.761 "flush": true, 00:18:20.761 "reset": false, 00:18:20.761 "nvme_admin": false, 00:18:20.761 "nvme_io": false, 00:18:20.761 "nvme_io_md": false, 00:18:20.761 "write_zeroes": true, 00:18:20.761 "zcopy": false, 00:18:20.761 "get_zone_info": false, 00:18:20.761 "zone_management": false, 00:18:20.761 "zone_append": false, 00:18:20.761 "compare": false, 00:18:20.761 "compare_and_write": false, 00:18:20.761 "abort": false, 00:18:20.761 "seek_hole": false, 00:18:20.761 "seek_data": false, 00:18:20.761 "copy": false, 00:18:20.761 "nvme_iov_md": false 00:18:20.761 }, 00:18:20.761 "driver_specific": { 00:18:20.761 "ftl": { 00:18:20.761 "base_bdev": "42844582-5342-44c6-8f05-71c9e83b89b0", 00:18:20.761 "cache": "nvc0n1p0" 00:18:20.761 } 00:18:20.761 } 00:18:20.761 } 00:18:20.761 ] 00:18:20.761 10:17:23 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:20.761 10:17:23 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:20.761 10:17:23 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:20.761 10:17:23 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:20.761 10:17:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:21.019 10:17:24 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:21.019 { 00:18:21.019 "name": "ftl0", 00:18:21.019 "aliases": [ 00:18:21.019 "cca8bd94-b064-499b-a946-5f5c31e51e40" 00:18:21.019 ], 00:18:21.019 "product_name": "FTL disk", 00:18:21.019 "block_size": 4096, 00:18:21.019 "num_blocks": 23592960, 00:18:21.019 "uuid": "cca8bd94-b064-499b-a946-5f5c31e51e40", 00:18:21.019 "assigned_rate_limits": { 00:18:21.019 "rw_ios_per_sec": 0, 00:18:21.019 "rw_mbytes_per_sec": 0, 00:18:21.019 "r_mbytes_per_sec": 0, 00:18:21.019 "w_mbytes_per_sec": 0 00:18:21.019 }, 00:18:21.019 "claimed": false, 00:18:21.019 "zoned": false, 00:18:21.019 "supported_io_types": { 00:18:21.019 "read": true, 00:18:21.019 "write": true, 00:18:21.019 "unmap": true, 00:18:21.019 "flush": true, 00:18:21.019 "reset": false, 00:18:21.019 "nvme_admin": false, 00:18:21.019 "nvme_io": false, 00:18:21.019 "nvme_io_md": false, 00:18:21.019 "write_zeroes": true, 00:18:21.019 "zcopy": false, 00:18:21.019 "get_zone_info": false, 00:18:21.019 "zone_management": false, 00:18:21.019 "zone_append": false, 00:18:21.019 "compare": false, 00:18:21.019 "compare_and_write": false, 00:18:21.019 "abort": false, 00:18:21.019 "seek_hole": false, 00:18:21.019 "seek_data": false, 00:18:21.019 "copy": false, 00:18:21.019 "nvme_iov_md": false 00:18:21.019 }, 00:18:21.019 "driver_specific": { 00:18:21.019 "ftl": { 00:18:21.019 "base_bdev": "42844582-5342-44c6-8f05-71c9e83b89b0", 
00:18:21.019 "cache": "nvc0n1p0" 00:18:21.019 } 00:18:21.019 } 00:18:21.019 } 00:18:21.019 ]' 00:18:21.019 10:17:24 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:21.019 10:17:24 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:21.019 10:17:24 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:21.278 [2024-10-17 10:17:24.244099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.244148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:21.278 [2024-10-17 10:17:24.244161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:21.278 [2024-10-17 10:17:24.244170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.244205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:21.278 [2024-10-17 10:17:24.246818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.246956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:21.278 [2024-10-17 10:17:24.246979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:18:21.278 [2024-10-17 10:17:24.246987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.247484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.247500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:21.278 [2024-10-17 10:17:24.247511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:18:21.278 [2024-10-17 10:17:24.247519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.251168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.251189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:21.278 [2024-10-17 10:17:24.251200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:18:21.278 [2024-10-17 10:17:24.251210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.258191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.258218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:21.278 [2024-10-17 10:17:24.258230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.926 ms 00:18:21.278 [2024-10-17 10:17:24.258237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.281504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.281618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:21.278 [2024-10-17 10:17:24.281640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:18:21.278 [2024-10-17 10:17:24.281647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.296504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.296625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:21.278 [2024-10-17 10:17:24.296646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.805 ms 00:18:21.278 [2024-10-17 10:17:24.296654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.296849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.296861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:21.278 [2024-10-17 10:17:24.296871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:18:21.278 [2024-10-17 10:17:24.296878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.319573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.319679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:21.278 [2024-10-17 10:17:24.319697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.661 ms 00:18:21.278 [2024-10-17 10:17:24.319704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.342333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.342435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:21.278 [2024-10-17 10:17:24.342455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.576 ms 00:18:21.278 [2024-10-17 10:17:24.342463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.278 [2024-10-17 10:17:24.364846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.278 [2024-10-17 10:17:24.364877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:21.278 [2024-10-17 10:17:24.364889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.333 ms 00:18:21.278 [2024-10-17 10:17:24.364896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.540 [2024-10-17 10:17:24.386719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.540 [2024-10-17 10:17:24.386748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:21.540 [2024-10-17 10:17:24.386761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.721 ms 00:18:21.540 [2024-10-17 10:17:24.386768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.540 [2024-10-17 10:17:24.386818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:21.540 [2024-10-17 10:17:24.386833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386898] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.386994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 
[2024-10-17 10:17:24.387244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:21.540 [2024-10-17 10:17:24.387327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:18:21.541 [2024-10-17 10:17:24.387452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:21.541 [2024-10-17 10:17:24.387814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:21.541 [2024-10-17 10:17:24.387825] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40 00:18:21.541 [2024-10-17 10:17:24.387832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:21.541 [2024-10-17 10:17:24.387841] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:21.541 [2024-10-17 10:17:24.387847] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:21.541 [2024-10-17 10:17:24.387856] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:21.541 [2024-10-17 10:17:24.387863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:21.541 [2024-10-17 10:17:24.387872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:18:21.541 [2024-10-17 10:17:24.387880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:21.541 [2024-10-17 10:17:24.387888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:21.541 [2024-10-17 10:17:24.387894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:21.541 [2024-10-17 10:17:24.387903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.541 [2024-10-17 10:17:24.387910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:21.541 [2024-10-17 10:17:24.387920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:18:21.541 [2024-10-17 10:17:24.387926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.400099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.541 [2024-10-17 10:17:24.400206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:21.541 [2024-10-17 10:17:24.400226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.144 ms 00:18:21.541 [2024-10-17 10:17:24.400235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.400603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.541 [2024-10-17 10:17:24.400613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:21.541 [2024-10-17 10:17:24.400623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:18:21.541 [2024-10-17 10:17:24.400630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.444127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.541 [2024-10-17 10:17:24.444161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.541 [2024-10-17 10:17:24.444173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.541 [2024-10-17 10:17:24.444183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.444281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.541 [2024-10-17 10:17:24.444290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.541 [2024-10-17 10:17:24.444300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.541 [2024-10-17 10:17:24.444307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.444371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.541 [2024-10-17 10:17:24.444379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.541 [2024-10-17 10:17:24.444391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.541 [2024-10-17 10:17:24.444398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.444430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.541 [2024-10-17 10:17:24.444437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.541 [2024-10-17 10:17:24.444446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.541 [2024-10-17 10:17:24.444453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.524196] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.541 [2024-10-17 10:17:24.524236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.541 [2024-10-17 10:17:24.524248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.541 [2024-10-17 10:17:24.524258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.541 [2024-10-17 10:17:24.586005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.541 [2024-10-17 10:17:24.586044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.541 [2024-10-17 10:17:24.586076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.541 [2024-10-17 10:17:24.586084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.542 [2024-10-17 10:17:24.586192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.542 [2024-10-17 10:17:24.586203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.542 [2024-10-17 10:17:24.586226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.542 [2024-10-17 10:17:24.586233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.542 [2024-10-17 10:17:24.586287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.542 [2024-10-17 10:17:24.586297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.542 [2024-10-17 10:17:24.586307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.542 [2024-10-17 10:17:24.586313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.542 [2024-10-17 10:17:24.586414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.542 [2024-10-17 10:17:24.586424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.542 [2024-10-17 10:17:24.586434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.542 [2024-10-17 10:17:24.586441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.542 [2024-10-17 10:17:24.586487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.542 [2024-10-17 10:17:24.586498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:21.542 [2024-10-17 10:17:24.586507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.542 [2024-10-17 10:17:24.586514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.542 [2024-10-17 10:17:24.586558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.542 [2024-10-17 10:17:24.586566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.542 [2024-10-17 10:17:24.586580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.542 [2024-10-17 10:17:24.586587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.542 [2024-10-17 10:17:24.586636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.542 [2024-10-17 10:17:24.586647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.542 [2024-10-17 10:17:24.586657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.542 [2024-10-17 10:17:24.586664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:21.542 [2024-10-17 10:17:24.586827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.709 ms, result 0 00:18:21.542 true 00:18:21.542 10:17:24 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73674 00:18:21.542 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73674 ']' 00:18:21.542 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73674 00:18:21.542 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:21.542 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.542 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73674 00:18:21.815 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.815 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.815 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73674' 00:18:21.815 killing process with pid 73674 00:18:21.815 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73674 00:18:21.815 10:17:24 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73674 00:18:28.378 10:17:30 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:28.944 65536+0 records in 00:18:28.944 65536+0 records out 00:18:28.944 268435456 bytes (268 MB, 256 MiB) copied, 1.06709 s, 252 MB/s 00:18:28.944 10:17:31 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:29.202 [2024-10-17 10:17:32.040665] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:18:29.202 [2024-10-17 10:17:32.040920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73872 ] 00:18:29.202 [2024-10-17 10:17:32.190555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.202 [2024-10-17 10:17:32.285847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.460 [2024-10-17 10:17:32.536337] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.460 [2024-10-17 10:17:32.536396] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.719 [2024-10-17 10:17:32.690722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.719 [2024-10-17 10:17:32.690772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:29.719 [2024-10-17 10:17:32.690785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.719 [2024-10-17 10:17:32.690793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.719 [2024-10-17 10:17:32.693390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.719 [2024-10-17 10:17:32.693422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.719 [2024-10-17 10:17:32.693432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:18:29.719 [2024-10-17 10:17:32.693439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.719 [2024-10-17 10:17:32.693503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:29.719 [2024-10-17 10:17:32.694170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:29.719 [2024-10-17 10:17:32.694232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.719 [2024-10-17 10:17:32.694240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.719 [2024-10-17 10:17:32.694248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:18:29.719 [2024-10-17 10:17:32.694255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.719 [2024-10-17 10:17:32.695351] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:29.719 [2024-10-17 10:17:32.707149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.719 [2024-10-17 10:17:32.707182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:29.719 [2024-10-17 10:17:32.707193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.799 ms 00:18:29.719 [2024-10-17 10:17:32.707205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.719 [2024-10-17 10:17:32.707285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.719 [2024-10-17 10:17:32.707296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:29.719 [2024-10-17 10:17:32.707304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:29.719 [2024-10-17 10:17:32.707311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.719 [2024-10-17 10:17:32.712130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:29.719 [2024-10-17 10:17:32.712161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.719 [2024-10-17 10:17:32.712170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:18:29.719 [2024-10-17 10:17:32.712177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.719 [2024-10-17 10:17:32.712257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.720 [2024-10-17 10:17:32.712266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.720 [2024-10-17 10:17:32.712274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:29.720 [2024-10-17 10:17:32.712281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.720 [2024-10-17 10:17:32.712304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.720 [2024-10-17 10:17:32.712312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:29.720 [2024-10-17 10:17:32.712319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:29.720 [2024-10-17 10:17:32.712329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.720 [2024-10-17 10:17:32.712348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:29.720 [2024-10-17 10:17:32.715761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.720 [2024-10-17 10:17:32.715787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.720 [2024-10-17 10:17:32.715796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:18:29.720 [2024-10-17 10:17:32.715804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.720 [2024-10-17 10:17:32.715836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.720 [2024-10-17 10:17:32.715845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:29.720 [2024-10-17 10:17:32.715853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:29.720 [2024-10-17 10:17:32.715860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.720 [2024-10-17 10:17:32.715878] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:29.720 [2024-10-17 10:17:32.715894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:29.720 [2024-10-17 10:17:32.715929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:29.720 [2024-10-17 10:17:32.715944] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:29.720 [2024-10-17 10:17:32.716044] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:29.720 [2024-10-17 10:17:32.716070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:29.720 [2024-10-17 10:17:32.716080] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:29.720 [2024-10-17 10:17:32.716090] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716099] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716106] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:29.720 [2024-10-17 10:17:32.716117] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:29.720 [2024-10-17 10:17:32.716124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:29.720 [2024-10-17 10:17:32.716131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:29.720 [2024-10-17 10:17:32.716138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.720 [2024-10-17 10:17:32.716147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:29.720 [2024-10-17 10:17:32.716155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:29.720 [2024-10-17 10:17:32.716162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.720 [2024-10-17 10:17:32.716249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.720 [2024-10-17 10:17:32.716258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:29.720 [2024-10-17 10:17:32.716265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:29.720 [2024-10-17 10:17:32.716274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.720 [2024-10-17 10:17:32.716382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:29.720 [2024-10-17 10:17:32.716398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:29.720 [2024-10-17 10:17:32.716406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:29.720 [2024-10-17 10:17:32.716428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:29.720 [2024-10-17 10:17:32.716448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.720 [2024-10-17 10:17:32.716461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:29.720 [2024-10-17 10:17:32.716467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:29.720 [2024-10-17 10:17:32.716474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.720 [2024-10-17 10:17:32.716486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:29.720 [2024-10-17 10:17:32.716493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:29.720 [2024-10-17 10:17:32.716499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:29.720 [2024-10-17 10:17:32.716513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716519] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:29.720 [2024-10-17 10:17:32.716532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:29.720 [2024-10-17 10:17:32.716551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:29.720 [2024-10-17 10:17:32.716570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:29.720 [2024-10-17 10:17:32.716589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:29.720 [2024-10-17 10:17:32.716608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.720 [2024-10-17 10:17:32.716621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:29.720 [2024-10-17 10:17:32.716627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:29.720 [2024-10-17 10:17:32.716633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.720 [2024-10-17 10:17:32.716640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:29.720 [2024-10-17 10:17:32.716646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:29.720 [2024-10-17 10:17:32.716652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:29.720 [2024-10-17 10:17:32.716665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:29.720 [2024-10-17 10:17:32.716671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716677] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:29.720 [2024-10-17 10:17:32.716685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:29.720 [2024-10-17 10:17:32.716692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.720 [2024-10-17 10:17:32.716705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:29.720 [2024-10-17 10:17:32.716714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:29.720 [2024-10-17 10:17:32.716721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:29.720 
[2024-10-17 10:17:32.716727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:29.720 [2024-10-17 10:17:32.716734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:29.720 [2024-10-17 10:17:32.716740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:29.720 [2024-10-17 10:17:32.716748] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:29.720 [2024-10-17 10:17:32.716759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.720 [2024-10-17 10:17:32.716766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:29.720 [2024-10-17 10:17:32.716774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:29.720 [2024-10-17 10:17:32.716781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:29.720 [2024-10-17 10:17:32.716788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:29.720 [2024-10-17 10:17:32.716795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:29.720 [2024-10-17 10:17:32.716801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:29.720 [2024-10-17 10:17:32.716808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:29.720 [2024-10-17 10:17:32.716814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:29.720 [2024-10-17 10:17:32.716821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:29.720 [2024-10-17 10:17:32.716828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:29.720 [2024-10-17 10:17:32.716834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:29.721 [2024-10-17 10:17:32.716841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:29.721 [2024-10-17 10:17:32.716848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:29.721 [2024-10-17 10:17:32.716855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:29.721 [2024-10-17 10:17:32.716862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:29.721 [2024-10-17 10:17:32.716870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.721 [2024-10-17 10:17:32.716877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:29.721 [2024-10-17 10:17:32.716885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:29.721 [2024-10-17 10:17:32.716891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:29.721 [2024-10-17 10:17:32.716898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:29.721 [2024-10-17 10:17:32.716906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.716913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:29.721 [2024-10-17 10:17:32.716920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:18:29.721 [2024-10-17 10:17:32.716929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.742563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.742596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.721 [2024-10-17 10:17:32.742606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.587 ms 00:18:29.721 [2024-10-17 10:17:32.742614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.742723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.742733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.721 [2024-10-17 10:17:32.742741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:29.721 [2024-10-17 10:17:32.742751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.787945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.788105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.721 [2024-10-17 10:17:32.788123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.172 ms 00:18:29.721 [2024-10-17 10:17:32.788132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.788223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.788235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.721 [2024-10-17 10:17:32.788244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:29.721 [2024-10-17 10:17:32.788251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.788551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.788565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.721 [2024-10-17 10:17:32.788573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:29.721 [2024-10-17 10:17:32.788580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.788704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.788714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.721 [2024-10-17 10:17:32.788722] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:29.721 [2024-10-17 10:17:32.788729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.721 [2024-10-17 10:17:32.801977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.721 [2024-10-17 10:17:32.802007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.721 [2024-10-17 10:17:32.802017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.229 ms 00:18:29.721 [2024-10-17 10:17:32.802024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.813961] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:29.979 [2024-10-17 10:17:32.813994] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:29.979 [2024-10-17 10:17:32.814006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.814014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:29.979 [2024-10-17 10:17:32.814022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.865 ms 00:18:29.979 [2024-10-17 10:17:32.814029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.837854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.837886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:29.979 [2024-10-17 10:17:32.837903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:18:29.979 [2024-10-17 10:17:32.837911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.849172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.849200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:29.979 [2024-10-17 10:17:32.849209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.212 ms 00:18:29.979 [2024-10-17 10:17:32.849216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.860195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.860223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:29.979 [2024-10-17 10:17:32.860233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.923 ms 00:18:29.979 [2024-10-17 10:17:32.860240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.860829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.860853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.979 [2024-10-17 10:17:32.860864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:18:29.979 [2024-10-17 10:17:32.860870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.915787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.915975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:29.979 [2024-10-17 10:17:32.915999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.894 ms 00:18:29.979 [2024-10-17 10:17:32.916007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.926429] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:29.979 [2024-10-17 10:17:32.940542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.940578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.979 [2024-10-17 10:17:32.940591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.415 ms 00:18:29.979 [2024-10-17 10:17:32.940598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.940686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.940697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:29.979 [2024-10-17 10:17:32.940709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:29.979 [2024-10-17 10:17:32.940716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.940762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.940771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.979 [2024-10-17 10:17:32.940780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:29.979 [2024-10-17 10:17:32.940787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.940807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.940815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:29.979 [2024-10-17 10:17:32.940826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.979 [2024-10-17 10:17:32.940834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.940865] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:29.979 [2024-10-17 10:17:32.940874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.940882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:29.979 [2024-10-17 10:17:32.940889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:29.979 [2024-10-17 10:17:32.940896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.964180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.964332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:29.979 [2024-10-17 10:17:32.964355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.265 ms 00:18:29.979 [2024-10-17 10:17:32.964363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.979 [2024-10-17 10:17:32.964455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.979 [2024-10-17 10:17:32.964467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:29.979 [2024-10-17 10:17:32.964475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:29.979 [2024-10-17 10:17:32.964482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
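Before the finish_msg total just below, it is worth noting how these records fit together: every management step is traced as an Action/name/duration/status quadruple, and the per-step durations largely account for the 274.232 ms that finish_msg reports for 'FTL startup'. A minimal Python sketch for tallying them from a saved copy of this console output; the console.log path and the regex are illustrative assumptions, not part of the SPDK test suite:

    import re
    from collections import defaultdict

    # Each step logs "... trace_step: *NOTICE*: [FTL][ftl0] duration: <ms> ms".
    DURATION = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([0-9.]+) ms")

    def tally_durations(log_text: str) -> dict:
        """Sum trace_step durations (ms) per FTL device across the whole log."""
        totals = defaultdict(float)
        for dev, ms in DURATION.findall(log_text):
            totals[dev] += float(ms)
        return dict(totals)

    # Hypothetical usage on a saved copy of this console output; note the sum
    # spans startup and shutdown alike, so filter by phase for an exact match.
    with open("console.log") as f:
        print(tally_durations(f.read()))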
00:18:29.979 [2024-10-17 10:17:32.965262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:29.979 [2024-10-17 10:17:32.968482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.232 ms, result 0 00:18:29.979 [2024-10-17 10:17:32.969569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:29.979 [2024-10-17 10:17:32.982444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:30.913  [2024-10-17T10:17:35.378Z] Copying: 30/256 [MB] (30 MBps) [2024-10-17T10:17:36.312Z] Copying: 60/256 [MB] (29 MBps) [2024-10-17T10:17:37.247Z] Copying: 84/256 [MB] (24 MBps) [2024-10-17T10:17:38.230Z] Copying: 124/256 [MB] (40 MBps) [2024-10-17T10:17:39.164Z] Copying: 166/256 [MB] (42 MBps) [2024-10-17T10:17:40.097Z] Copying: 209/256 [MB] (43 MBps) [2024-10-17T10:17:41.030Z] Copying: 230/256 [MB] (20 MBps) [2024-10-17T10:17:41.596Z] Copying: 242/256 [MB] (11 MBps) [2024-10-17T10:17:41.596Z] Copying: 256/256 [MB] (average 29 MBps)[2024-10-17 10:17:41.576280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.505 [2024-10-17 10:17:41.583627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.505 [2024-10-17 10:17:41.583774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:38.505 [2024-10-17 10:17:41.583792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:38.505 [2024-10-17 10:17:41.583799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.505 [2024-10-17 10:17:41.583821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:38.505 [2024-10-17 10:17:41.585908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.505 [2024-10-17 10:17:41.585927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:38.505 [2024-10-17 10:17:41.585941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:18:38.505 [2024-10-17 10:17:41.585948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.505 [2024-10-17 10:17:41.586940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.505 [2024-10-17 10:17:41.587034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:38.505 [2024-10-17 10:17:41.587064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:18:38.505 [2024-10-17 10:17:41.587072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.505 [2024-10-17 10:17:41.592810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.505 [2024-10-17 10:17:41.592834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:38.505 [2024-10-17 10:17:41.592842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:18:38.505 [2024-10-17 10:17:41.592848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.598525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.598546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:38.764 [2024-10-17 10:17:41.598553] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.646 ms 00:18:38.764 [2024-10-17 10:17:41.598560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.616802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.616911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:38.764 [2024-10-17 10:17:41.616925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.197 ms 00:18:38.764 [2024-10-17 10:17:41.616932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.628231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.628259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:38.764 [2024-10-17 10:17:41.628268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.273 ms 00:18:38.764 [2024-10-17 10:17:41.628275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.628366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.628374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:38.764 [2024-10-17 10:17:41.628381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:38.764 [2024-10-17 10:17:41.628387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.646600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.646627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:38.764 [2024-10-17 10:17:41.646634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.201 ms 00:18:38.764 [2024-10-17 10:17:41.646640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.664053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.664078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:38.764 [2024-10-17 10:17:41.664086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.378 ms 00:18:38.764 [2024-10-17 10:17:41.664092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.681111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.681137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:38.764 [2024-10-17 10:17:41.681145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.992 ms 00:18:38.764 [2024-10-17 10:17:41.681151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.698614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.764 [2024-10-17 10:17:41.698640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:38.764 [2024-10-17 10:17:41.698648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.413 ms 00:18:38.764 [2024-10-17 10:17:41.698654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.764 [2024-10-17 10:17:41.698681] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:38.764 [2024-10-17 10:17:41.698692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698840] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:38.764 [2024-10-17 10:17:41.698974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.698979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 
10:17:41.698985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.698991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.698997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:38.765 [2024-10-17 10:17:41.699147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:38.765 [2024-10-17 10:17:41.699341] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:38.765 [2024-10-17 10:17:41.699348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40 00:18:38.765 [2024-10-17 10:17:41.699355] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:38.765 [2024-10-17 10:17:41.699360] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:38.765 [2024-10-17 10:17:41.699366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:38.765 [2024-10-17 10:17:41.699372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:38.765 [2024-10-17 10:17:41.699378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:38.765 [2024-10-17 10:17:41.699384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:38.765 [2024-10-17 10:17:41.699389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:38.765 [2024-10-17 10:17:41.699394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:38.765 [2024-10-17 10:17:41.699400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:38.765 [2024-10-17 10:17:41.699405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.765 [2024-10-17 10:17:41.699411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:38.765 [2024-10-17 10:17:41.699418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:18:38.765 [2024-10-17 10:17:41.699423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.709139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.765 [2024-10-17 10:17:41.709162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:38.765 [2024-10-17 10:17:41.709171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.701 ms 00:18:38.765 [2024-10-17 10:17:41.709177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.709465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.765 [2024-10-17 10:17:41.709472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:38.765 [2024-10-17 10:17:41.709478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:18:38.765 [2024-10-17 10:17:41.709488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.737379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.737409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.765 [2024-10-17 10:17:41.737417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.737423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.737497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.737505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.765 [2024-10-17 10:17:41.737511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.737520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
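A note on the statistics dump above: WAF, the write amplification factor, is conventionally the ratio of media writes to user writes, so with total writes at 960 (all of it internal metadata traffic) and user writes at 0, the printed value of inf is the expected result rather than an error. A small sketch of that arithmetic, assuming the conventional definition matches what ftl_debug.c prints:

    import math

    def waf(total_writes: int, user_writes: int) -> float:
        """Write amplification factor: media writes per user write."""
        # The dump above reports total writes 960, user writes 0 -> inf.
        return math.inf if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # inf, matching the "WAF: inf" line in the stats dump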
00:18:38.765 [2024-10-17 10:17:41.737555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.737563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.765 [2024-10-17 10:17:41.737569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.737574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.737588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.737594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.765 [2024-10-17 10:17:41.737599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.737605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.797554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.797700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.765 [2024-10-17 10:17:41.797714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.797720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.846150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.846186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.765 [2024-10-17 10:17:41.846195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.846205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.846260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.846267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.765 [2024-10-17 10:17:41.846274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.846280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.846302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.765 [2024-10-17 10:17:41.846308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.765 [2024-10-17 10:17:41.846315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.765 [2024-10-17 10:17:41.846321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.765 [2024-10-17 10:17:41.846390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.766 [2024-10-17 10:17:41.846398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.766 [2024-10-17 10:17:41.846404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.766 [2024-10-17 10:17:41.846410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.766 [2024-10-17 10:17:41.846433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.766 [2024-10-17 10:17:41.846440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.766 [2024-10-17 10:17:41.846446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.766 [2024-10-17 
10:17:41.846452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.766 [2024-10-17 10:17:41.846483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.766 [2024-10-17 10:17:41.846490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.766 [2024-10-17 10:17:41.846496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.766 [2024-10-17 10:17:41.846502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.766 [2024-10-17 10:17:41.846535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.766 [2024-10-17 10:17:41.846543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.766 [2024-10-17 10:17:41.846549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.766 [2024-10-17 10:17:41.846555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.766 [2024-10-17 10:17:41.846664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.029 ms, result 0 00:18:39.699 00:18:39.699 00:18:39.699 10:17:42 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:39.699 10:17:42 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73991 00:18:39.699 10:17:42 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73991 00:18:39.699 10:17:42 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73991 ']' 00:18:39.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.699 10:17:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.699 10:17:42 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.699 10:17:42 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.700 10:17:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.700 10:17:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:39.700 [2024-10-17 10:17:42.772980] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
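The second spdk_tgt startup that follows re-dumps the superblock and layout, and the dumped figures are internally consistent: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region, and each SB metadata region begins precisely where the previous one ends (blk_offs plus blk_sz of one region equals blk_offs of the next). A sketch that checks both invariants, with the constants copied verbatim from the dump below:

    # L2P sizing: entries * address size should equal the 90.00 MiB l2p region.
    L2P_ENTRIES = 23592960
    L2P_ADDR_SIZE = 4  # bytes per entry, as reported by ftl_layout.c
    assert L2P_ENTRIES * L2P_ADDR_SIZE == 90 * 1024 * 1024

    # (blk_offs, blk_sz) pairs for the nvc SB metadata regions, in dump order.
    NVC_REGIONS = [
        (0x0, 0x20), (0x20, 0x5a00), (0x5a20, 0x80), (0x5aa0, 0x80),
        (0x5b20, 0x800), (0x6320, 0x800), (0x6b20, 0x800), (0x7320, 0x800),
        (0x7b20, 0x40), (0x7b60, 0x40), (0x7ba0, 0x20), (0x7bc0, 0x20),
        (0x7be0, 0x20), (0x7c00, 0x20), (0x7c20, 0x13b6e0),
    ]
    # Every region should start exactly where its predecessor ends.
    for (off, sz), (nxt, _) in zip(NVC_REGIONS, NVC_REGIONS[1:]):
        assert off + sz == nxt, hex(nxt)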
00:18:39.700 [2024-10-17 10:17:42.773123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73991 ] 00:18:39.957 [2024-10-17 10:17:42.921481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.957 [2024-10-17 10:17:43.003321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.541 10:17:43 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.541 10:17:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:40.541 10:17:43 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:40.800 [2024-10-17 10:17:43.818746] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.800 [2024-10-17 10:17:43.818797] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:41.061 [2024-10-17 10:17:43.985300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:43.985342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:41.061 [2024-10-17 10:17:43.985354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:41.061 [2024-10-17 10:17:43.985361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:43.987507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:43.987537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.061 [2024-10-17 10:17:43.987547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:18:41.061 [2024-10-17 10:17:43.987553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:43.987610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:41.061 [2024-10-17 10:17:43.988173] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:41.061 [2024-10-17 10:17:43.988302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:43.988311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.061 [2024-10-17 10:17:43.988320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:18:41.061 [2024-10-17 10:17:43.988326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:43.989353] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:41.061 [2024-10-17 10:17:43.999032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:43.999069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:41.061 [2024-10-17 10:17:43.999079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.684 ms 00:18:41.061 [2024-10-17 10:17:43.999087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:43.999158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:43.999168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:41.061 [2024-10-17 10:17:43.999175] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:41.061 [2024-10-17 10:17:43.999182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.003861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.003890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.061 [2024-10-17 10:17:44.003898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.641 ms 00:18:41.061 [2024-10-17 10:17:44.003905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.003983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.003993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.061 [2024-10-17 10:17:44.003999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:41.061 [2024-10-17 10:17:44.004006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.004025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.004036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:41.061 [2024-10-17 10:17:44.004041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:41.061 [2024-10-17 10:17:44.004057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.004074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:41.061 [2024-10-17 10:17:44.006833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.006962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.061 [2024-10-17 10:17:44.006977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:18:41.061 [2024-10-17 10:17:44.006986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.007020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.007027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:41.061 [2024-10-17 10:17:44.007035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:41.061 [2024-10-17 10:17:44.007045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.007082] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:41.061 [2024-10-17 10:17:44.007098] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:41.061 [2024-10-17 10:17:44.007132] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:41.061 [2024-10-17 10:17:44.007147] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:41.061 [2024-10-17 10:17:44.007237] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:41.061 [2024-10-17 10:17:44.007245] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:41.061 [2024-10-17 10:17:44.007255] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:41.061 [2024-10-17 10:17:44.007263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:41.061 [2024-10-17 10:17:44.007286] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:41.061 [2024-10-17 10:17:44.007291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:41.061 [2024-10-17 10:17:44.007299] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:41.061 [2024-10-17 10:17:44.007306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.007312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:41.061 [2024-10-17 10:17:44.007318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:18:41.061 [2024-10-17 10:17:44.007325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.007398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.061 [2024-10-17 10:17:44.007406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:41.061 [2024-10-17 10:17:44.007413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:41.061 [2024-10-17 10:17:44.007420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.061 [2024-10-17 10:17:44.007498] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:41.061 [2024-10-17 10:17:44.007506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:41.061 [2024-10-17 10:17:44.007513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:41.061 [2024-10-17 10:17:44.007532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:41.061 [2024-10-17 10:17:44.007553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.061 [2024-10-17 10:17:44.007564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:41.061 [2024-10-17 10:17:44.007571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:41.061 [2024-10-17 10:17:44.007576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.061 [2024-10-17 10:17:44.007583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:41.061 [2024-10-17 10:17:44.007590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:41.061 [2024-10-17 10:17:44.007597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.061 
[2024-10-17 10:17:44.007603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:41.061 [2024-10-17 10:17:44.007609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:41.061 [2024-10-17 10:17:44.007631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:41.061 [2024-10-17 10:17:44.007650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:41.061 [2024-10-17 10:17:44.007667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:41.061 [2024-10-17 10:17:44.007685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.061 [2024-10-17 10:17:44.007697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:41.061 [2024-10-17 10:17:44.007702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.061 [2024-10-17 10:17:44.007713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:41.061 [2024-10-17 10:17:44.007720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:41.061 [2024-10-17 10:17:44.007725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.061 [2024-10-17 10:17:44.007731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:41.061 [2024-10-17 10:17:44.007737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:41.061 [2024-10-17 10:17:44.007744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:41.061 [2024-10-17 10:17:44.007756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:41.061 [2024-10-17 10:17:44.007760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.061 [2024-10-17 10:17:44.007767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:41.061 [2024-10-17 10:17:44.007773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:41.061 [2024-10-17 10:17:44.007780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.062 [2024-10-17 10:17:44.007788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.062 [2024-10-17 10:17:44.007795] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:41.062 [2024-10-17 10:17:44.007800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:41.062 [2024-10-17 10:17:44.007807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:41.062 [2024-10-17 10:17:44.007813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:41.062 [2024-10-17 10:17:44.007819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:41.062 [2024-10-17 10:17:44.007824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:41.062 [2024-10-17 10:17:44.007832] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:41.062 [2024-10-17 10:17:44.007839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:41.062 [2024-10-17 10:17:44.007855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:41.062 [2024-10-17 10:17:44.007862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:41.062 [2024-10-17 10:17:44.007868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:41.062 [2024-10-17 10:17:44.007875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:41.062 [2024-10-17 10:17:44.007881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:41.062 [2024-10-17 10:17:44.007888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:41.062 [2024-10-17 10:17:44.007893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:41.062 [2024-10-17 10:17:44.007900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:41.062 [2024-10-17 10:17:44.007905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:41.062 [2024-10-17 10:17:44.007936] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:41.062 [2024-10-17 
10:17:44.007943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:41.062 [2024-10-17 10:17:44.007957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:41.062 [2024-10-17 10:17:44.007964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:41.062 [2024-10-17 10:17:44.007969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:41.062 [2024-10-17 10:17:44.007976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.007982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:41.062 [2024-10-17 10:17:44.007990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:18:41.062 [2024-10-17 10:17:44.007996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.029564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.029676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:41.062 [2024-10-17 10:17:44.029691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.503 ms 00:18:41.062 [2024-10-17 10:17:44.029697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.029796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.029806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:41.062 [2024-10-17 10:17:44.029814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:41.062 [2024-10-17 10:17:44.029819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.054528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.054554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.062 [2024-10-17 10:17:44.054564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.690 ms 00:18:41.062 [2024-10-17 10:17:44.054572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.054619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.054626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.062 [2024-10-17 10:17:44.054635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:41.062 [2024-10-17 10:17:44.054641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.054934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.054945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.062 [2024-10-17 10:17:44.054954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:41.062 [2024-10-17 10:17:44.054959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.055080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.055088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.062 [2024-10-17 10:17:44.055095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:41.062 [2024-10-17 10:17:44.055101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.067130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.067155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.062 [2024-10-17 10:17:44.067164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.011 ms 00:18:41.062 [2024-10-17 10:17:44.067170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.076912] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:41.062 [2024-10-17 10:17:44.077025] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:41.062 [2024-10-17 10:17:44.077040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.077063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:41.062 [2024-10-17 10:17:44.077073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.790 ms 00:18:41.062 [2024-10-17 10:17:44.077079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.095835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.095934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:41.062 [2024-10-17 10:17:44.095950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.700 ms 00:18:41.062 [2024-10-17 10:17:44.095956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.104750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.104774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:41.062 [2024-10-17 10:17:44.104784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.742 ms 00:18:41.062 [2024-10-17 10:17:44.104790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.113256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.113280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:41.062 [2024-10-17 10:17:44.113290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.424 ms 00:18:41.062 [2024-10-17 10:17:44.113295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.062 [2024-10-17 10:17:44.113763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.062 [2024-10-17 10:17:44.113787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:41.062 [2024-10-17 10:17:44.113796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:18:41.062 [2024-10-17 10:17:44.113802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.321 [2024-10-17 
10:17:44.172608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.321 [2024-10-17 10:17:44.172765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:41.321 [2024-10-17 10:17:44.172786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.784 ms 00:18:41.321 [2024-10-17 10:17:44.172793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.321 [2024-10-17 10:17:44.181149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:41.321 [2024-10-17 10:17:44.192953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.321 [2024-10-17 10:17:44.192989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:41.321 [2024-10-17 10:17:44.193001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.823 ms 00:18:41.321 [2024-10-17 10:17:44.193009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.321 [2024-10-17 10:17:44.193104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.321 [2024-10-17 10:17:44.193115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:41.321 [2024-10-17 10:17:44.193123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:41.321 [2024-10-17 10:17:44.193130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.321 [2024-10-17 10:17:44.193170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.321 [2024-10-17 10:17:44.193179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:41.321 [2024-10-17 10:17:44.193185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:41.321 [2024-10-17 10:17:44.193192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.321 [2024-10-17 10:17:44.193212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.321 [2024-10-17 10:17:44.193221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:41.321 [2024-10-17 10:17:44.193227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.321 [2024-10-17 10:17:44.193236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.322 [2024-10-17 10:17:44.193260] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:41.322 [2024-10-17 10:17:44.193270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.322 [2024-10-17 10:17:44.193276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:41.322 [2024-10-17 10:17:44.193283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:41.322 [2024-10-17 10:17:44.193290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.322 [2024-10-17 10:17:44.211524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.322 [2024-10-17 10:17:44.211642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:41.322 [2024-10-17 10:17:44.211659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.215 ms 00:18:41.322 [2024-10-17 10:17:44.211666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.322 [2024-10-17 10:17:44.211737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.322 [2024-10-17 10:17:44.211746] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:41.322 [2024-10-17 10:17:44.211754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:41.322 [2024-10-17 10:17:44.211760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.322 [2024-10-17 10:17:44.212437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.322 [2024-10-17 10:17:44.214709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.911 ms, result 0 00:18:41.322 [2024-10-17 10:17:44.215448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.322 Some configs were skipped because the RPC state that can call them passed over. 00:18:41.322 10:17:44 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:41.581 [2024-10-17 10:17:44.443662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.581 [2024-10-17 10:17:44.443786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:41.581 [2024-10-17 10:17:44.444104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:18:41.581 [2024-10-17 10:17:44.444170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.581 [2024-10-17 10:17:44.444231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.718 ms, result 0 00:18:41.581 true 00:18:41.581 10:17:44 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:41.581 [2024-10-17 10:17:44.644181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.581 [2024-10-17 10:17:44.644304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:41.581 [2024-10-17 10:17:44.644353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:18:41.581 [2024-10-17 10:17:44.644371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.581 [2024-10-17 10:17:44.644417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.716 ms, result 0 00:18:41.581 true 00:18:41.581 10:17:44 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73991 00:18:41.581 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73991 ']' 00:18:41.581 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73991 00:18:41.581 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:41.581 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.581 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73991 00:18:41.840 killing process with pid 73991 00:18:41.840 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.840 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.840 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73991' 00:18:41.840 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73991 00:18:41.840 10:17:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73991 00:18:42.408 [2024-10-17 10:17:45.229342] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.229393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:42.408 [2024-10-17 10:17:45.229404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:42.408 [2024-10-17 10:17:45.229411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.229429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:42.408 [2024-10-17 10:17:45.231603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.231629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:42.408 [2024-10-17 10:17:45.231642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:18:42.408 [2024-10-17 10:17:45.231649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.231874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.231882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:42.408 [2024-10-17 10:17:45.231890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:42.408 [2024-10-17 10:17:45.231896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.235009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.235035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:42.408 [2024-10-17 10:17:45.235044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:18:42.408 [2024-10-17 10:17:45.235060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.240404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.240430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:42.408 [2024-10-17 10:17:45.240442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.314 ms 00:18:42.408 [2024-10-17 10:17:45.240448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.247899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.247925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:42.408 [2024-10-17 10:17:45.247936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.408 ms 00:18:42.408 [2024-10-17 10:17:45.247946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.254377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.254403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:42.408 [2024-10-17 10:17:45.254412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.399 ms 00:18:42.408 [2024-10-17 10:17:45.254421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.254529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.254537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:42.408 [2024-10-17 10:17:45.254545] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:42.408 [2024-10-17 10:17:45.254551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.262160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.262184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:42.408 [2024-10-17 10:17:45.262193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.592 ms 00:18:42.408 [2024-10-17 10:17:45.262198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.269767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.269795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:42.408 [2024-10-17 10:17:45.269805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.539 ms 00:18:42.408 [2024-10-17 10:17:45.269811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.276809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.276836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:42.408 [2024-10-17 10:17:45.276845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.967 ms 00:18:42.408 [2024-10-17 10:17:45.276850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.283793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.408 [2024-10-17 10:17:45.283818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:42.408 [2024-10-17 10:17:45.283826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.893 ms 00:18:42.408 [2024-10-17 10:17:45.283832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.408 [2024-10-17 10:17:45.283859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:42.408 [2024-10-17 10:17:45.283870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:42.408 [2024-10-17 10:17:45.283934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283939] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.283998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 
[2024-10-17 10:17:45.284113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:42.409 [2024-10-17 10:17:45.284276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:42.409 [2024-10-17 10:17:45.284497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:42.410 [2024-10-17 10:17:45.284504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:42.410 [2024-10-17 10:17:45.284510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:42.410 [2024-10-17 10:17:45.284518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:42.410 [2024-10-17 10:17:45.284524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:42.410 [2024-10-17 10:17:45.284532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:42.410 [2024-10-17 10:17:45.284543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:42.410 [2024-10-17 10:17:45.284552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40 00:18:42.410 [2024-10-17 10:17:45.284562] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:42.410 [2024-10-17 10:17:45.284571] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:42.410 [2024-10-17 10:17:45.284578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:42.410 [2024-10-17 10:17:45.284586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:42.410 [2024-10-17 10:17:45.284592] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:42.410 [2024-10-17 10:17:45.284599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:42.410 [2024-10-17 10:17:45.284605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:42.410 [2024-10-17 10:17:45.284611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:42.410 [2024-10-17 10:17:45.284616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:42.410 [2024-10-17 10:17:45.284623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:42.410 [2024-10-17 10:17:45.284628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:42.410 [2024-10-17 10:17:45.284636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:18:42.410 [2024-10-17 10:17:45.284641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.294425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.410 [2024-10-17 10:17:45.294450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:42.410 [2024-10-17 10:17:45.294462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.768 ms 00:18:42.410 [2024-10-17 10:17:45.294468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.294760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.410 [2024-10-17 10:17:45.294768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:42.410 [2024-10-17 10:17:45.294776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:18:42.410 [2024-10-17 10:17:45.294782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.330019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.330055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.410 [2024-10-17 10:17:45.330065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.330072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.330148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.330156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.410 [2024-10-17 10:17:45.330164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.330170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.330205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.330212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.410 [2024-10-17 10:17:45.330221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.330227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.330243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.330249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.410 [2024-10-17 10:17:45.330256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.330262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.389910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.389941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.410 [2024-10-17 10:17:45.389951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.389958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 
10:17:45.439368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.410 [2024-10-17 10:17:45.439412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.410 [2024-10-17 10:17:45.439503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.410 [2024-10-17 10:17:45.439548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.410 [2024-10-17 10:17:45.439641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.410 [2024-10-17 10:17:45.439690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.410 [2024-10-17 10:17:45.439742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.410 [2024-10-17 10:17:45.439791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.410 [2024-10-17 10:17:45.439798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.410 [2024-10-17 10:17:45.439804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.410 [2024-10-17 10:17:45.439915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.555 ms, result 0 00:18:42.978 10:17:45 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:42.978 10:17:45 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:42.978 [2024-10-17 10:17:46.064484] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:18:42.978 [2024-10-17 10:17:46.064608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74038 ] 00:18:43.236 [2024-10-17 10:17:46.213029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.236 [2024-10-17 10:17:46.312354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.494 [2024-10-17 10:17:46.523943] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.494 [2024-10-17 10:17:46.523996] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.754 [2024-10-17 10:17:46.671918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.671969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.754 [2024-10-17 10:17:46.671980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.754 [2024-10-17 10:17:46.671986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.674141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.674170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.754 [2024-10-17 10:17:46.674177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:18:43.754 [2024-10-17 10:17:46.674183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.674242] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.754 [2024-10-17 10:17:46.674807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.754 [2024-10-17 10:17:46.674827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.674834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.754 [2024-10-17 10:17:46.674841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:18:43.754 [2024-10-17 10:17:46.674847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.675864] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.754 [2024-10-17 10:17:46.685482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.685512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.754 [2024-10-17 10:17:46.685521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.619 ms 00:18:43.754 [2024-10-17 10:17:46.685530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.685657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.685672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.754 [2024-10-17 10:17:46.685680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:18:43.754 [2024-10-17 10:17:46.685686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.690174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.690201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.754 [2024-10-17 10:17:46.690209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms 00:18:43.754 [2024-10-17 10:17:46.690216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.690292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.690300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.754 [2024-10-17 10:17:46.690307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:43.754 [2024-10-17 10:17:46.690313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.690332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.690338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.754 [2024-10-17 10:17:46.690344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:43.754 [2024-10-17 10:17:46.690352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.690371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:43.754 [2024-10-17 10:17:46.693039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.693070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.754 [2024-10-17 10:17:46.693078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:18:43.754 [2024-10-17 10:17:46.693084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.693113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.693120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.754 [2024-10-17 10:17:46.693126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:43.754 [2024-10-17 10:17:46.693132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.754 [2024-10-17 10:17:46.693146] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.754 [2024-10-17 10:17:46.693161] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.754 [2024-10-17 10:17:46.693191] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.754 [2024-10-17 10:17:46.693204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:43.754 [2024-10-17 10:17:46.693287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.754 [2024-10-17 10:17:46.693296] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.754 [2024-10-17 10:17:46.693305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.754 [2024-10-17 10:17:46.693313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.754 [2024-10-17 10:17:46.693320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.754 [2024-10-17 10:17:46.693326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:43.754 [2024-10-17 10:17:46.693334] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.754 [2024-10-17 10:17:46.693340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.754 [2024-10-17 10:17:46.693346] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.754 [2024-10-17 10:17:46.693352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.754 [2024-10-17 10:17:46.693358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.754 [2024-10-17 10:17:46.693364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:18:43.755 [2024-10-17 10:17:46.693370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.755 [2024-10-17 10:17:46.693441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.755 [2024-10-17 10:17:46.693448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.755 [2024-10-17 10:17:46.693454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:43.755 [2024-10-17 10:17:46.693462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.755 [2024-10-17 10:17:46.693542] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.755 [2024-10-17 10:17:46.693549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.755 [2024-10-17 10:17:46.693556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.755 [2024-10-17 10:17:46.693574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.755 [2024-10-17 10:17:46.693590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.755 [2024-10-17 10:17:46.693600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.755 [2024-10-17 10:17:46.693606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:43.755 [2024-10-17 10:17:46.693611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.755 [2024-10-17 10:17:46.693622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.755 [2024-10-17 10:17:46.693629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:43.755 [2024-10-17 10:17:46.693634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693639] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.755 [2024-10-17 10:17:46.693644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.755 [2024-10-17 10:17:46.693660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.755 [2024-10-17 10:17:46.693675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.755 [2024-10-17 10:17:46.693691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.755 [2024-10-17 10:17:46.693706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.755 [2024-10-17 10:17:46.693721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.755 [2024-10-17 10:17:46.693732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.755 [2024-10-17 10:17:46.693737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:43.755 [2024-10-17 10:17:46.693742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.755 [2024-10-17 10:17:46.693747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.755 [2024-10-17 10:17:46.693752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:43.755 [2024-10-17 10:17:46.693757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.755 [2024-10-17 10:17:46.693769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:43.755 [2024-10-17 10:17:46.693774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693779] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.755 [2024-10-17 10:17:46.693785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.755 [2024-10-17 10:17:46.693790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.755 [2024-10-17 10:17:46.693803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.755 
[2024-10-17 10:17:46.693808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.755 [2024-10-17 10:17:46.693813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.755 [2024-10-17 10:17:46.693819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.755 [2024-10-17 10:17:46.693824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.755 [2024-10-17 10:17:46.693830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.755 [2024-10-17 10:17:46.693836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.755 [2024-10-17 10:17:46.693845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:43.755 [2024-10-17 10:17:46.693858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:43.755 [2024-10-17 10:17:46.693864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:43.755 [2024-10-17 10:17:46.693870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:43.755 [2024-10-17 10:17:46.693875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:43.755 [2024-10-17 10:17:46.693881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:43.755 [2024-10-17 10:17:46.693886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:43.755 [2024-10-17 10:17:46.693892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:43.755 [2024-10-17 10:17:46.693897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:43.755 [2024-10-17 10:17:46.693902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:43.755 [2024-10-17 10:17:46.693930] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.755 [2024-10-17 10:17:46.693936] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.755 [2024-10-17 10:17:46.693948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.755 [2024-10-17 10:17:46.693954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.755 [2024-10-17 10:17:46.693959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.755 [2024-10-17 10:17:46.693965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.755 [2024-10-17 10:17:46.693971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.755 [2024-10-17 10:17:46.693977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:18:43.755 [2024-10-17 10:17:46.693987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.755 [2024-10-17 10:17:46.715359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.755 [2024-10-17 10:17:46.715389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.755 [2024-10-17 10:17:46.715398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.331 ms 00:18:43.755 [2024-10-17 10:17:46.715403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.755 [2024-10-17 10:17:46.715501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.755 [2024-10-17 10:17:46.715509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.755 [2024-10-17 10:17:46.715515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:43.755 [2024-10-17 10:17:46.715524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.755 [2024-10-17 10:17:46.763741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.763780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.756 [2024-10-17 10:17:46.763790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.200 ms 00:18:43.756 [2024-10-17 10:17:46.763796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.763881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.763890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.756 [2024-10-17 10:17:46.763897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.756 [2024-10-17 10:17:46.763903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.764209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.764221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.756 [2024-10-17 10:17:46.764228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:43.756 [2024-10-17 10:17:46.764234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 
10:17:46.764340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.764350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.756 [2024-10-17 10:17:46.764357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:43.756 [2024-10-17 10:17:46.764363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.775362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.775389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.756 [2024-10-17 10:17:46.775397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.983 ms 00:18:43.756 [2024-10-17 10:17:46.775403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.785200] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:43.756 [2024-10-17 10:17:46.785229] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.756 [2024-10-17 10:17:46.785239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.785245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.756 [2024-10-17 10:17:46.785252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.754 ms 00:18:43.756 [2024-10-17 10:17:46.785258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.804399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.804434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.756 [2024-10-17 10:17:46.804444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:18:43.756 [2024-10-17 10:17:46.804451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.813123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.813150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.756 [2024-10-17 10:17:46.813157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.623 ms 00:18:43.756 [2024-10-17 10:17:46.813163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.821744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.821769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.756 [2024-10-17 10:17:46.821776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:18:43.756 [2024-10-17 10:17:46.821782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.756 [2024-10-17 10:17:46.822284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.756 [2024-10-17 10:17:46.822308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.756 [2024-10-17 10:17:46.822316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:18:43.756 [2024-10-17 10:17:46.822322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.866740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.866784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:44.015 [2024-10-17 10:17:46.866795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.399 ms 00:18:44.015 [2024-10-17 10:17:46.866801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.875056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:44.015 [2024-10-17 10:17:46.887426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.887457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.015 [2024-10-17 10:17:46.887467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.537 ms 00:18:44.015 [2024-10-17 10:17:46.887474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.887553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.887564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:44.015 [2024-10-17 10:17:46.887572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:44.015 [2024-10-17 10:17:46.887578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.887617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.887624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.015 [2024-10-17 10:17:46.887630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:44.015 [2024-10-17 10:17:46.887636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.887655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.887664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.015 [2024-10-17 10:17:46.887672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.015 [2024-10-17 10:17:46.887678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.887703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:44.015 [2024-10-17 10:17:46.887711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.887717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:44.015 [2024-10-17 10:17:46.887723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:44.015 [2024-10-17 10:17:46.887728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.906066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.906111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.015 [2024-10-17 10:17:46.906120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.321 ms 00:18:44.015 [2024-10-17 10:17:46.906127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.906203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.015 [2024-10-17 10:17:46.906212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:44.015 [2024-10-17 10:17:46.906219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:44.015 [2024-10-17 10:17:46.906225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.015 [2024-10-17 10:17:46.906874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.015 [2024-10-17 10:17:46.909520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.718 ms, result 0 00:18:44.015 [2024-10-17 10:17:46.910539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.015 [2024-10-17 10:17:46.921979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.958  [2024-10-17T10:17:48.985Z] Copying: 25/256 [MB] (25 MBps) [2024-10-17T10:17:49.929Z] Copying: 46/256 [MB] (21 MBps) [2024-10-17T10:17:51.303Z] Copying: 69/256 [MB] (22 MBps) [2024-10-17T10:17:52.237Z] Copying: 87/256 [MB] (18 MBps) [2024-10-17T10:17:53.169Z] Copying: 106/256 [MB] (18 MBps) [2024-10-17T10:17:54.104Z] Copying: 130/256 [MB] (24 MBps) [2024-10-17T10:17:55.064Z] Copying: 149/256 [MB] (19 MBps) [2024-10-17T10:17:55.998Z] Copying: 168/256 [MB] (18 MBps) [2024-10-17T10:17:56.932Z] Copying: 189/256 [MB] (21 MBps) [2024-10-17T10:17:58.308Z] Copying: 210/256 [MB] (20 MBps) [2024-10-17T10:17:59.245Z] Copying: 223/256 [MB] (13 MBps) [2024-10-17T10:17:59.813Z] Copying: 247/256 [MB] (23 MBps) [2024-10-17T10:17:59.813Z] Copying: 256/256 [MB] (average 20 MBps)[2024-10-17 10:17:59.632156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:56.722 [2024-10-17 10:17:59.641643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.641681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:56.722 [2024-10-17 10:17:59.641695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:56.722 [2024-10-17 10:17:59.641703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.641724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:56.722 [2024-10-17 10:17:59.644314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.644349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:56.722 [2024-10-17 10:17:59.644359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:18:56.722 [2024-10-17 10:17:59.644368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.644618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.644627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:56.722 [2024-10-17 10:17:59.644634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:18:56.722 [2024-10-17 10:17:59.644641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.648336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.648358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:56.722 [2024-10-17 10:17:59.648367] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.680 ms 00:18:56.722 [2024-10-17 10:17:59.648379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.655344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.655372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:56.722 [2024-10-17 10:17:59.655382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.948 ms 00:18:56.722 [2024-10-17 10:17:59.655391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.678682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.678716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:56.722 [2024-10-17 10:17:59.678726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.238 ms 00:18:56.722 [2024-10-17 10:17:59.678733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.693313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.693344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:56.722 [2024-10-17 10:17:59.693356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:18:56.722 [2024-10-17 10:17:59.693368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.693486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.693495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:56.722 [2024-10-17 10:17:59.693504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:56.722 [2024-10-17 10:17:59.693511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.718739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.718777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:56.722 [2024-10-17 10:17:59.718790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.204 ms 00:18:56.722 [2024-10-17 10:17:59.718797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.722 [2024-10-17 10:17:59.742340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.722 [2024-10-17 10:17:59.742375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:56.722 [2024-10-17 10:17:59.742387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.491 ms 00:18:56.722 [2024-10-17 10:17:59.742395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.723 [2024-10-17 10:17:59.765458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.723 [2024-10-17 10:17:59.765492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:56.723 [2024-10-17 10:17:59.765502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.029 ms 00:18:56.723 [2024-10-17 10:17:59.765510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.723 [2024-10-17 10:17:59.788540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.723 [2024-10-17 10:17:59.788573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:18:56.723 [2024-10-17 10:17:59.788584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.971 ms 00:18:56.723 [2024-10-17 10:17:59.788592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.723 [2024-10-17 10:17:59.788623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:56.723 [2024-10-17 10:17:59.788637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 
10:17:59.788804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:56.723 [2024-10-17 10:17:59.788983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.788997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:56.723 [2024-10-17 10:17:59.789254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:56.724 [2024-10-17 10:17:59.789391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:56.724 [2024-10-17 10:17:59.789399] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40 00:18:56.724 [2024-10-17 10:17:59.789407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:56.724 [2024-10-17 10:17:59.789414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:56.724 [2024-10-17 10:17:59.789421] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:56.724 [2024-10-17 10:17:59.789428] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:56.724 [2024-10-17 10:17:59.789435] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:56.724 [2024-10-17 10:17:59.789442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:56.724 [2024-10-17 10:17:59.789449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:56.724 [2024-10-17 10:17:59.789455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:56.724 [2024-10-17 10:17:59.789461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:56.724 [2024-10-17 10:17:59.789468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.724 [2024-10-17 10:17:59.789475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:56.724 [2024-10-17 10:17:59.789483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:18:56.724 [2024-10-17 10:17:59.789493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.724 [2024-10-17 10:17:59.801728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.724 [2024-10-17 10:17:59.801758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:56.724 [2024-10-17 10:17:59.801768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.208 ms 00:18:56.724 [2024-10-17 10:17:59.801776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.724 [2024-10-17 10:17:59.802143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.724 [2024-10-17 10:17:59.802154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:56.724 [2024-10-17 10:17:59.802167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:56.724 [2024-10-17 10:17:59.802174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.836935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.836970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.982 [2024-10-17 10:17:59.836979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.836986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 
[2024-10-17 10:17:59.837068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.837077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.982 [2024-10-17 10:17:59.837087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.837095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.837136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.837145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.982 [2024-10-17 10:17:59.837152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.837159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.837176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.837183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.982 [2024-10-17 10:17:59.837190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.837199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.914517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.914564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.982 [2024-10-17 10:17:59.914575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.914582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.977886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.977936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.982 [2024-10-17 10:17:59.977951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.977958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.978013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.978022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:56.982 [2024-10-17 10:17:59.978029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.978037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.978079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.982 [2024-10-17 10:17:59.978088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:56.982 [2024-10-17 10:17:59.978096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.982 [2024-10-17 10:17:59.978111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.982 [2024-10-17 10:17:59.978198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.983 [2024-10-17 10:17:59.978207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:56.983 [2024-10-17 10:17:59.978215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.983 [2024-10-17 10:17:59.978223] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.983 [2024-10-17 10:17:59.978254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.983 [2024-10-17 10:17:59.978263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:56.983 [2024-10-17 10:17:59.978270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.983 [2024-10-17 10:17:59.978277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.983 [2024-10-17 10:17:59.978315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.983 [2024-10-17 10:17:59.978324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.983 [2024-10-17 10:17:59.978332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.983 [2024-10-17 10:17:59.978339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.983 [2024-10-17 10:17:59.978381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.983 [2024-10-17 10:17:59.978390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.983 [2024-10-17 10:17:59.978397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.983 [2024-10-17 10:17:59.978404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.983 [2024-10-17 10:17:59.978532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.883 ms, result 0 00:18:57.549 00:18:57.549 00:18:57.807 10:18:00 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:57.807 10:18:00 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:58.375 10:18:01 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:58.375 [2024-10-17 10:18:01.268988] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:18:58.375 [2024-10-17 10:18:01.269124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74198 ] 00:18:58.375 [2024-10-17 10:18:01.418553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.634 [2024-10-17 10:18:01.515411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.893 [2024-10-17 10:18:01.767223] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:58.893 [2024-10-17 10:18:01.767273] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:58.893 [2024-10-17 10:18:01.926351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.926394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:58.893 [2024-10-17 10:18:01.926407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:58.893 [2024-10-17 10:18:01.926414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.929073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.929105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.893 [2024-10-17 10:18:01.929114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:18:58.893 [2024-10-17 10:18:01.929121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.929189] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:58.893 [2024-10-17 10:18:01.929889] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:58.893 [2024-10-17 10:18:01.929912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.929919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.893 [2024-10-17 10:18:01.929928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:18:58.893 [2024-10-17 10:18:01.929935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.931027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:58.893 [2024-10-17 10:18:01.943681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.943716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:58.893 [2024-10-17 10:18:01.943727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.655 ms 00:18:58.893 [2024-10-17 10:18:01.943739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.943818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.943829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:58.893 [2024-10-17 10:18:01.943837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:58.893 [2024-10-17 10:18:01.943844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.948673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:58.893 [2024-10-17 10:18:01.948706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.893 [2024-10-17 10:18:01.948716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:18:58.893 [2024-10-17 10:18:01.948725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.948809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.948818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.893 [2024-10-17 10:18:01.948826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:58.893 [2024-10-17 10:18:01.948834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.948856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.948864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:58.893 [2024-10-17 10:18:01.948871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:58.893 [2024-10-17 10:18:01.948881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.948900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:58.893 [2024-10-17 10:18:01.952097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.952123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.893 [2024-10-17 10:18:01.952132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:18:58.893 [2024-10-17 10:18:01.952139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.952174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.952182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:58.893 [2024-10-17 10:18:01.952190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:58.893 [2024-10-17 10:18:01.952197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.952215] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:58.893 [2024-10-17 10:18:01.952232] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:58.893 [2024-10-17 10:18:01.952268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:58.893 [2024-10-17 10:18:01.952282] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:58.893 [2024-10-17 10:18:01.952384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:58.893 [2024-10-17 10:18:01.952394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:58.893 [2024-10-17 10:18:01.952404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:58.893 [2024-10-17 10:18:01.952413] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:58.893 [2024-10-17 10:18:01.952422] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:58.893 [2024-10-17 10:18:01.952429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:58.893 [2024-10-17 10:18:01.952439] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:58.893 [2024-10-17 10:18:01.952446] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:58.893 [2024-10-17 10:18:01.952453] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:58.893 [2024-10-17 10:18:01.952461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.952468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:58.893 [2024-10-17 10:18:01.952475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:58.893 [2024-10-17 10:18:01.952481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.893 [2024-10-17 10:18:01.952568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.893 [2024-10-17 10:18:01.952576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:58.893 [2024-10-17 10:18:01.952583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:58.894 [2024-10-17 10:18:01.952592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.894 [2024-10-17 10:18:01.952689] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:58.894 [2024-10-17 10:18:01.952716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:58.894 [2024-10-17 10:18:01.952724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:58.894 [2024-10-17 10:18:01.952746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:58.894 [2024-10-17 10:18:01.952766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:58.894 [2024-10-17 10:18:01.952780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:58.894 [2024-10-17 10:18:01.952786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:58.894 [2024-10-17 10:18:01.952793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:58.894 [2024-10-17 10:18:01.952807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:58.894 [2024-10-17 10:18:01.952814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:58.894 [2024-10-17 10:18:01.952820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:58.894 [2024-10-17 10:18:01.952833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952840] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:58.894 [2024-10-17 10:18:01.952853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:58.894 [2024-10-17 10:18:01.952872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:58.894 [2024-10-17 10:18:01.952891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:58.894 [2024-10-17 10:18:01.952910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:58.894 [2024-10-17 10:18:01.952923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:58.894 [2024-10-17 10:18:01.952929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:58.894 [2024-10-17 10:18:01.952942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:58.894 [2024-10-17 10:18:01.952948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:58.894 [2024-10-17 10:18:01.952954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:58.894 [2024-10-17 10:18:01.952960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:58.894 [2024-10-17 10:18:01.952967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:58.894 [2024-10-17 10:18:01.952973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:58.894 [2024-10-17 10:18:01.952986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:58.894 [2024-10-17 10:18:01.952992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.952999] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:58.894 [2024-10-17 10:18:01.953006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:58.894 [2024-10-17 10:18:01.953014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:58.894 [2024-10-17 10:18:01.953021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:58.894 [2024-10-17 10:18:01.953028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:58.894 [2024-10-17 10:18:01.953035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:58.894 [2024-10-17 10:18:01.953041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:58.894 
[2024-10-17 10:18:01.953066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:58.894 [2024-10-17 10:18:01.953073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:58.894 [2024-10-17 10:18:01.953080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:58.894 [2024-10-17 10:18:01.953088] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:58.894 [2024-10-17 10:18:01.953099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:58.894 [2024-10-17 10:18:01.953116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:58.894 [2024-10-17 10:18:01.953123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:58.894 [2024-10-17 10:18:01.953131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:58.894 [2024-10-17 10:18:01.953137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:58.894 [2024-10-17 10:18:01.953145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:58.894 [2024-10-17 10:18:01.953151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:58.894 [2024-10-17 10:18:01.953158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:58.894 [2024-10-17 10:18:01.953165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:58.894 [2024-10-17 10:18:01.953172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:58.894 [2024-10-17 10:18:01.953206] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:58.894 [2024-10-17 10:18:01.953214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:58.894 [2024-10-17 10:18:01.953229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:58.894 [2024-10-17 10:18:01.953236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:58.894 [2024-10-17 10:18:01.953243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:58.894 [2024-10-17 10:18:01.953250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.894 [2024-10-17 10:18:01.953256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:58.894 [2024-10-17 10:18:01.953265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:18:58.894 [2024-10-17 10:18:01.953274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.894 [2024-10-17 10:18:01.978984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.894 [2024-10-17 10:18:01.979018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.894 [2024-10-17 10:18:01.979029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.649 ms 00:18:58.894 [2024-10-17 10:18:01.979036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.894 [2024-10-17 10:18:01.979164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.894 [2024-10-17 10:18:01.979174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:58.894 [2024-10-17 10:18:01.979182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:58.894 [2024-10-17 10:18:01.979193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.154 [2024-10-17 10:18:02.022226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.154 [2024-10-17 10:18:02.022265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.154 [2024-10-17 10:18:02.022277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.012 ms 00:18:59.154 [2024-10-17 10:18:02.022284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.154 [2024-10-17 10:18:02.022377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.154 [2024-10-17 10:18:02.022388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.154 [2024-10-17 10:18:02.022397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:59.154 [2024-10-17 10:18:02.022405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.154 [2024-10-17 10:18:02.022725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.154 [2024-10-17 10:18:02.022750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.154 [2024-10-17 10:18:02.022759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:59.154 [2024-10-17 10:18:02.022766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.154 [2024-10-17 10:18:02.022893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.154 [2024-10-17 10:18:02.022908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.154 [2024-10-17 10:18:02.022917] 
00:18:59.154 [2024-10-17 10:18:02.022924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.036258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.036288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:59.154 [2024-10-17 10:18:02.036298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.314 ms
00:18:59.154 [2024-10-17 10:18:02.036305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.049414] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:18:59.154 [2024-10-17 10:18:02.049445] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:59.154 [2024-10-17 10:18:02.049457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.049465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:59.154 [2024-10-17 10:18:02.049474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.054 ms
00:18:59.154 [2024-10-17 10:18:02.049481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.073663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.073703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:59.154 [2024-10-17 10:18:02.073718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.115 ms
00:18:59.154 [2024-10-17 10:18:02.073727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.085589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.085618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:59.154 [2024-10-17 10:18:02.085628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.796 ms
00:18:59.154 [2024-10-17 10:18:02.085634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.097219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.097249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:59.154 [2024-10-17 10:18:02.097260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.524 ms
00:18:59.154 [2024-10-17 10:18:02.097267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.097866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.097891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:59.154 [2024-10-17 10:18:02.097900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms
00:18:59.154 [2024-10-17 10:18:02.097907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.152909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.152963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:59.154 [2024-10-17 10:18:02.152975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.980 ms
00:18:59.154 [2024-10-17 10:18:02.152984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.154 [2024-10-17 10:18:02.163518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:18:59.154 [2024-10-17 10:18:02.177387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.154 [2024-10-17 10:18:02.177425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:59.155 [2024-10-17 10:18:02.177437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.294 ms
00:18:59.155 [2024-10-17 10:18:02.177446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.177528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.155 [2024-10-17 10:18:02.177541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:59.155 [2024-10-17 10:18:02.177549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:18:59.155 [2024-10-17 10:18:02.177557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.177605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.155 [2024-10-17 10:18:02.177614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:59.155 [2024-10-17 10:18:02.177622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:18:59.155 [2024-10-17 10:18:02.177630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.177650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.155 [2024-10-17 10:18:02.177662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:59.155 [2024-10-17 10:18:02.177672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:59.155 [2024-10-17 10:18:02.177679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.177709] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:59.155 [2024-10-17 10:18:02.177718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.155 [2024-10-17 10:18:02.177726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:59.155 [2024-10-17 10:18:02.177734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:18:59.155 [2024-10-17 10:18:02.177741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.201478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.155 [2024-10-17 10:18:02.201515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:59.155 [2024-10-17 10:18:02.201527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.717 ms
00:18:59.155 [2024-10-17 10:18:02.201534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.201620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.155 [2024-10-17 10:18:02.201631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:59.155 [2024-10-17 10:18:02.201639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:18:59.155 [2024-10-17 10:18:02.201647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.155 [2024-10-17 10:18:02.202510] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:59.155 [2024-10-17 10:18:02.205382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.880 ms, result 0
00:18:59.155 [2024-10-17 10:18:02.206795] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:59.155 [2024-10-17 10:18:02.219483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:59.722  [2024-10-17T10:18:02.813Z] Copying: 4096/4096 [kB] (average 10088 kBps)
[2024-10-17 10:18:02.628277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:59.722 [2024-10-17 10:18:02.637095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.637127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:59.722 [2024-10-17 10:18:02.637139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:59.722 [2024-10-17 10:18:02.637147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.637167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:59.722 [2024-10-17 10:18:02.639763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.639794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:59.722 [2024-10-17 10:18:02.639805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms
00:18:59.722 [2024-10-17 10:18:02.639813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.642582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.642614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:59.722 [2024-10-17 10:18:02.642623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms
00:18:59.722 [2024-10-17 10:18:02.642631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.646838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.646864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:59.722 [2024-10-17 10:18:02.646873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms
00:18:59.722 [2024-10-17 10:18:02.646885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.653778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.653805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:59.722 [2024-10-17 10:18:02.653814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.868 ms
00:18:59.722 [2024-10-17 10:18:02.653823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.677589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.677620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:59.722 [2024-10-17 10:18:02.677631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.702 ms
00:18:59.722 [2024-10-17 10:18:02.677639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.692137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.692168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:59.722 [2024-10-17 10:18:02.692179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.465 ms
00:18:59.722 [2024-10-17 10:18:02.692190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.692321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.692330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:59.722 [2024-10-17 10:18:02.692338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
00:18:59.722 [2024-10-17 10:18:02.692345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.715806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.715835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:59.722 [2024-10-17 10:18:02.715845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.439 ms
00:18:59.722 [2024-10-17 10:18:02.715853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.739208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.739246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:59.722 [2024-10-17 10:18:02.739256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.324 ms
00:18:59.722 [2024-10-17 10:18:02.739264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.762510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.762541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:59.722 [2024-10-17 10:18:02.762551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.214 ms
00:18:59.722 [2024-10-17 10:18:02.762558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.722 [2024-10-17 10:18:02.785502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.722 [2024-10-17 10:18:02.785531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:59.722 [2024-10-17 10:18:02.785540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.887 ms
00:18:59.723 [2024-10-17 10:18:02.785547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.723 [2024-10-17 10:18:02.785578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:59.723 [2024-10-17 10:18:02.785592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.785994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:59.723 [2024-10-17 10:18:02.786293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:59.724 [2024-10-17 10:18:02.786357] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:59.724 [2024-10-17 10:18:02.786365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40
00:18:59.724 [2024-10-17 10:18:02.786375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:59.724 [2024-10-17 10:18:02.786382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:59.724 [2024-10-17 10:18:02.786389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:59.724 [2024-10-17 10:18:02.786396] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:59.724 [2024-10-17 10:18:02.786402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:59.724 [2024-10-17 10:18:02.786410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:59.724 [2024-10-17 10:18:02.786417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:59.724 [2024-10-17 10:18:02.786423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:59.724 [2024-10-17 10:18:02.786429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:59.724 [2024-10-17 10:18:02.786436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.724 [2024-10-17 10:18:02.786443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:59.724 [2024-10-17 10:18:02.786451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms
00:18:59.724 [2024-10-17 10:18:02.786461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.724 [2024-10-17 10:18:02.798592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.724 [2024-10-17 10:18:02.798620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:59.724 [2024-10-17 10:18:02.798630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.106 ms
00:18:59.724 [2024-10-17 10:18:02.798639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.724 [2024-10-17 10:18:02.798985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.724 [2024-10-17 10:18:02.799005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:59.724 [2024-10-17 10:18:02.799017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms
00:18:59.724 [2024-10-17 10:18:02.799024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.833764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.833798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:59.982 [2024-10-17 10:18:02.833808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.833817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.833882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.833891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:59.982 [2024-10-17 10:18:02.833904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.833912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.833951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.833960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:59.982 [2024-10-17 10:18:02.833969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.833977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.833995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.834003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:59.982 [2024-10-17 10:18:02.834012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.834022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.910779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.910821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:59.982 [2024-10-17 10:18:02.910831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.910838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:59.982 [2024-10-17 10:18:02.974196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:59.982 [2024-10-17 10:18:02.974265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:59.982 [2024-10-17 10:18:02.974315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:59.982 [2024-10-17 10:18:02.974425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:59.982 [2024-10-17 10:18:02.974476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:59.982 [2024-10-17 10:18:02.974538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:59.982 [2024-10-17 10:18:02.974593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:59.982 [2024-10-17 10:18:02.974601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:59.982 [2024-10-17 10:18:02.974608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.982 [2024-10-17 10:18:02.974736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.630 ms, result 0
00:19:00.550
00:19:00.550
00:19:00.808 10:18:03 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74228
00:19:00.808 10:18:03 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74228
00:19:00.808 10:18:03 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:00.808 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74228 ']'
00:19:00.808 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:00.808 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:00.808 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:00.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:00.808 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:00.808 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:01.067 [2024-10-17 10:18:03.732838] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
00:19:00.808 [2024-10-17 10:18:03.732964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ]
00:19:00.808 [2024-10-17 10:18:03.878739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:01.067 [2024-10-17 10:18:03.975815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:01.632 10:18:04 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:01.632 10:18:04 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:19:01.632 10:18:04 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:19:01.899 [2024-10-17 10:18:04.769936] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:01.899 [2024-10-17 10:18:04.769991] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:01.899 [2024-10-17 10:18:04.945504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.945549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:01.899 [2024-10-17 10:18:04.945563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:01.899 [2024-10-17 10:18:04.945571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.899 [2024-10-17 10:18:04.948212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.948244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:01.899 [2024-10-17 10:18:04.948255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms
00:19:01.899 [2024-10-17 10:18:04.948262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.899 [2024-10-17 10:18:04.948331] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:01.899 [2024-10-17 10:18:04.948979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:01.899 [2024-10-17 10:18:04.949004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.949012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:01.899 [2024-10-17 10:18:04.949022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms
00:19:01.899 [2024-10-17 10:18:04.949029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.899 [2024-10-17 10:18:04.950235] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:01.899 [2024-10-17 10:18:04.962817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.962856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:01.899 [2024-10-17 10:18:04.962868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.587 ms
00:19:01.899 [2024-10-17 10:18:04.962878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.899 [2024-10-17 10:18:04.962956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.962968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:01.899 [2024-10-17 10:18:04.962976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:19:01.899 [2024-10-17 10:18:04.962985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.899 [2024-10-17 10:18:04.968010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.968059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:01.899 [2024-10-17 10:18:04.968069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.979 ms
00:19:01.899 [2024-10-17 10:18:04.968077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.899 [2024-10-17 10:18:04.968169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.899 [2024-10-17 10:18:04.968180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:01.899 [2024-10-17 10:18:04.968188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:19:01.900 [2024-10-17 10:18:04.968197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.900 [2024-10-17 10:18:04.968221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.900 [2024-10-17 10:18:04.968234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:19:01.900 [2024-10-17 10:18:04.968241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:01.900 [2024-10-17 10:18:04.968249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.900 [2024-10-17 10:18:04.968272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:01.900 [2024-10-17 10:18:04.971691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.900 [2024-10-17 10:18:04.971718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:01.900 [2024-10-17 10:18:04.971729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms
00:19:01.900 [2024-10-17 10:18:04.971736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.900 [2024-10-17 10:18:04.971770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.900 [2024-10-17 10:18:04.971778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:19:01.900 [2024-10-17 10:18:04.971787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:19:01.900 [2024-10-17 10:18:04.971794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.900 [2024-10-17 10:18:04.971815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:19:01.900 [2024-10-17 10:18:04.971833] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:19:01.900 [2024-10-17 10:18:04.971872] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:19:01.900 [2024-10-17 10:18:04.971887] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:19:01.900 [2024-10-17 10:18:04.971991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:19:01.900 [2024-10-17 10:18:04.972002] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:01.900 [2024-10-17 10:18:04.972013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:19:01.900 [2024-10-17 10:18:04.972023] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972035] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:19:01.900 [2024-10-17 10:18:04.972063] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:19:01.900 [2024-10-17 10:18:04.972070] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:19:01.900 [2024-10-17 10:18:04.972080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:19:01.900 [2024-10-17 10:18:04.972087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.900 [2024-10-17 10:18:04.972095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:19:01.900 [2024-10-17 10:18:04.972102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms
00:19:01.900 [2024-10-17 10:18:04.972111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.900 [2024-10-17 10:18:04.972214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.900 [2024-10-17 10:18:04.972224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:19:01.900 [2024-10-17 10:18:04.972233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms
00:19:01.900 [2024-10-17 10:18:04.972250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.900 [2024-10-17 10:18:04.972352] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:01.900 [2024-10-17 10:18:04.972375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:01.900 [2024-10-17 10:18:04.972383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:01.900 [2024-10-17 10:18:04.972408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:01.900 [2024-10-17 10:18:04.972433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:01.900 [2024-10-17 10:18:04.972448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:01.900 [2024-10-17 10:18:04.972457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:19:01.900 [2024-10-17 10:18:04.972463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:01.900 [2024-10-17 10:18:04.972472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:01.900 [2024-10-17 10:18:04.972479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:19:01.900 [2024-10-17 10:18:04.972487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:01.900 [2024-10-17 10:18:04.972503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:01.900 [2024-10-17 10:18:04.972529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:01.900 [2024-10-17 10:18:04.972553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:01.900 [2024-10-17 10:18:04.972574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:01.900 [2024-10-17 10:18:04.972596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:01.900 [2024-10-17 10:18:04.972618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:01.900 [2024-10-17 10:18:04.972632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:01.900 [2024-10-17 10:18:04.972640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:19:01.900 [2024-10-17 10:18:04.972647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:01.900 [2024-10-17 10:18:04.972655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:19:01.900 [2024-10-17 10:18:04.972661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:19:01.900 [2024-10-17 10:18:04.972670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:19:01.900 [2024-10-17 10:18:04.972684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:19:01.900 [2024-10-17 10:18:04.972691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972699] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:01.900 [2024-10-17 10:18:04.972707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:01.900 [2024-10-17 10:18:04.972716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:01.900 [2024-10-17 10:18:04.972733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:01.900 [2024-10-17 10:18:04.972740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:19:01.900 [2024-10-17 10:18:04.972748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:19:01.900 [2024-10-17 10:18:04.972755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:01.900 [2024-10-17 10:18:04.972763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:19:01.900 [2024-10-17 10:18:04.972770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:19:01.900 [2024-10-17 10:18:04.972779] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:01.900 [2024-10-17 10:18:04.972788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:01.900 [2024-10-17 10:18:04.972801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:01.900 [2024-10-17 10:18:04.972810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:19:01.900 [2024-10-17 10:18:04.972818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:19:01.900 [2024-10-17 10:18:04.972825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:19:01.900 [2024-10-17 10:18:04.972834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:19:01.900 [2024-10-17 10:18:04.972841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:19:01.900 [2024-10-17 10:18:04.972850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:19:01.900 [2024-10-17 10:18:04.972857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:19:01.900 [2024-10-17 10:18:04.972865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:19:01.900 [2024-10-17 10:18:04.972872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:19:01.900 [2024-10-17 10:18:04.972881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:19:01.900 [2024-10-17 10:18:04.972888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:19:01.900 [2024-10-17 10:18:04.972896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:19:01.900 [2024-10-17 10:18:04.972904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:19:01.900 [2024-10-17 10:18:04.972912] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:01.900 [2024-10-17 10:18:04.972920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:01.901 [2024-10-17 10:18:04.972931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:01.901 [2024-10-17 10:18:04.972938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:01.901 [2024-10-17 10:18:04.972947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:01.901 [2024-10-17 10:18:04.972954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:01.901 [2024-10-17 10:18:04.972964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.901 [2024-10-17 10:18:04.972972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:19:01.901 [2024-10-17 10:18:04.972981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms
00:19:01.901 [2024-10-17 10:18:04.972988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:02.187 [2024-10-17 10:18:04.998833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:02.187 [2024-10-17 10:18:04.998869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:02.187 [2024-10-17 10:18:04.998881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.757 ms
00:19:02.187 [2024-10-17 10:18:04.998889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:02.187 [2024-10-17 10:18:04.999003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:02.187 [2024-10-17 10:18:04.999014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:19:02.187 [2024-10-17 10:18:04.999024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:19:02.187 [2024-10-17 10:18:04.999031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:02.187 [2024-10-17 10:18:05.029226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:02.187 [2024-10-17 10:18:05.029258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:02.187 [2024-10-17 10:18:05.029271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.161 ms
00:19:02.187 [2024-10-17 10:18:05.029280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:02.187 [2024-10-17 10:18:05.029332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:02.187 [2024-10-17 10:18:05.029341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:02.187 [2024-10-17 10:18:05.029350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:02.187 [2024-10-17 10:18:05.029357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:02.187 [2024-10-17 10:18:05.029676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:02.187 [2024-10-17 10:18:05.029700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:02.187 [2024-10-17 10:18:05.029711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms
00:19:02.187 [2024-10-17 10:18:05.029719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.029839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.029847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.187 [2024-10-17 10:18:05.029857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:02.187 [2024-10-17 10:18:05.029864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.044037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.044085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.187 [2024-10-17 10:18:05.044096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.152 ms 00:19:02.187 [2024-10-17 10:18:05.044103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.057032] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:02.187 [2024-10-17 10:18:05.057070] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.187 [2024-10-17 10:18:05.057084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.057093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.187 [2024-10-17 10:18:05.057103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.877 ms 00:19:02.187 [2024-10-17 10:18:05.057109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.081380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.081415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.187 [2024-10-17 10:18:05.081428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.202 ms 00:19:02.187 [2024-10-17 10:18:05.081437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.093286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.093317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.187 [2024-10-17 10:18:05.093330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.782 ms 00:19:02.187 [2024-10-17 10:18:05.093337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.105090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.105121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.187 [2024-10-17 10:18:05.105132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.692 ms 00:19:02.187 [2024-10-17 10:18:05.105139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.105756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.105781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.187 [2024-10-17 10:18:05.105791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:19:02.187 [2024-10-17 10:18:05.105798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 
10:18:05.168493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.168549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.187 [2024-10-17 10:18:05.168564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.669 ms 00:19:02.187 [2024-10-17 10:18:05.168573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.178932] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:02.187 [2024-10-17 10:18:05.192530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.192573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:02.187 [2024-10-17 10:18:05.192584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.870 ms 00:19:02.187 [2024-10-17 10:18:05.192593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.192666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.192677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:02.187 [2024-10-17 10:18:05.192685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:02.187 [2024-10-17 10:18:05.192694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.192742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.192753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.187 [2024-10-17 10:18:05.192761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:02.187 [2024-10-17 10:18:05.192770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.192793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.192805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.187 [2024-10-17 10:18:05.192812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.187 [2024-10-17 10:18:05.192823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.192853] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:02.187 [2024-10-17 10:18:05.192866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.192873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:02.187 [2024-10-17 10:18:05.192882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:02.187 [2024-10-17 10:18:05.192892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.216644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.216678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.187 [2024-10-17 10:18:05.216691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.729 ms 00:19:02.187 [2024-10-17 10:18:05.216699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-10-17 10:18:05.216786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-10-17 10:18:05.216797] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.187 [2024-10-17 10:18:05.216807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:02.188 [2024-10-17 10:18:05.216814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.188 [2024-10-17 10:18:05.217882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.188 [2024-10-17 10:18:05.220890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.090 ms, result 0 00:19:02.188 [2024-10-17 10:18:05.222919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.188 Some configs were skipped because the RPC state that can call them passed over. 00:19:02.188 10:18:05 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:02.446 [2024-10-17 10:18:05.450539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-10-17 10:18:05.450588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:02.446 [2024-10-17 10:18:05.450600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:19:02.446 [2024-10-17 10:18:05.450610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-10-17 10:18:05.450642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.053 ms, result 0 00:19:02.446 true 00:19:02.446 10:18:05 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:02.704 [2024-10-17 10:18:05.649481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.704 [2024-10-17 10:18:05.649522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:02.704 [2024-10-17 10:18:05.649534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:19:02.704 [2024-10-17 10:18:05.649542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.704 [2024-10-17 10:18:05.649576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.731 ms, result 0 00:19:02.704 true 00:19:02.704 10:18:05 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74228 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74228 ']' 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74228 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74228 00:19:02.704 killing process with pid 74228 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74228' 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74228 00:19:02.704 10:18:05 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74228 00:19:03.641 [2024-10-17 10:18:06.367390] 
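
The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the LBA range: one at LBA 0 and one at LBA 23591936, which is the L2P entry count reported at startup (23592960) minus 1024. For reference, the same trims issued by hand, with the rpc.py path exactly as used in this run:

# Sketch only; commands copied from the trim.sh steps logged above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # head of the range
$RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # tail: 23592960 - 1024

Each call is logged as its own short 'Process trim' management step. killprocess, an autotest_common.sh helper that verifies the PID with kill -0, resolves its process name via ps, then kills and waits, takes the app down next; that is what starts the 'FTL shutdown' sequence that follows.
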
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.367446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:03.641 [2024-10-17 10:18:06.367459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:03.641 [2024-10-17 10:18:06.367468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.367490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:03.641 [2024-10-17 10:18:06.370088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.370123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:03.641 [2024-10-17 10:18:06.370140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:19:03.641 [2024-10-17 10:18:06.370149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.370447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.370457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:03.641 [2024-10-17 10:18:06.370467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:03.641 [2024-10-17 10:18:06.370474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.374891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.374921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:03.641 [2024-10-17 10:18:06.374931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 00:19:03.641 [2024-10-17 10:18:06.374939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.381852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.381890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:03.641 [2024-10-17 10:18:06.381903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.875 ms 00:19:03.641 [2024-10-17 10:18:06.381910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.391999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.392032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:03.641 [2024-10-17 10:18:06.392053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.035 ms 00:19:03.641 [2024-10-17 10:18:06.392066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.399089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.399123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:03.641 [2024-10-17 10:18:06.399136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.984 ms 00:19:03.641 [2024-10-17 10:18:06.399147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.399272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.399282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:03.641 [2024-10-17 10:18:06.399293] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:03.641 [2024-10-17 10:18:06.399301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.410086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.410125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:03.641 [2024-10-17 10:18:06.410138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.763 ms 00:19:03.641 [2024-10-17 10:18:06.410146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.420468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.420499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:03.641 [2024-10-17 10:18:06.420512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.285 ms 00:19:03.641 [2024-10-17 10:18:06.420520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.430316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.430345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:03.641 [2024-10-17 10:18:06.430356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.757 ms 00:19:03.641 [2024-10-17 10:18:06.430364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.440045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.641 [2024-10-17 10:18:06.440081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:03.641 [2024-10-17 10:18:06.440092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.619 ms 00:19:03.641 [2024-10-17 10:18:06.440099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.641 [2024-10-17 10:18:06.440132] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:03.641 [2024-10-17 10:18:06.440146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440234] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:03.641 [2024-10-17 10:18:06.440401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 
[2024-10-17 10:18:06.440441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:03.642 [2024-10-17 10:18:06.440645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:03.642 [2024-10-17 10:18:06.440986] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:03.642 [2024-10-17 10:18:06.440997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40 00:19:03.642 [2024-10-17 10:18:06.441009] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:03.642 [2024-10-17 10:18:06.441020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:03.642 [2024-10-17 10:18:06.441029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:03.642 [2024-10-17 10:18:06.441038] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:03.642 [2024-10-17 10:18:06.441044] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:03.642 [2024-10-17 10:18:06.441063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:03.642 [2024-10-17 10:18:06.441071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:03.642 [2024-10-17 10:18:06.441079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:03.642 [2024-10-17 10:18:06.441085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:03.642 [2024-10-17 10:18:06.441094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
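
Reading the bands dump: each row is valid blocks over band capacity, a write count, and a state; here all 100 bands are free with 0 of 261120 blocks valid. In the statistics that follow, WAF is total writes divided by user writes, so it prints inf in this run: all 960 writes were internal and user writes are 0. A throwaway tally for dumps like this, assuming the log was captured one record per line into a hypothetical ftl.log:

# Count bands per state from a captured log.
grep -o 'state: [a-z]*' ftl.log | sort | uniq -c
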
00:19:03.642 [2024-10-17 10:18:06.441101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:03.642 [2024-10-17 10:18:06.441111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:19:03.642 [2024-10-17 10:18:06.441118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.642 [2024-10-17 10:18:06.453463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.642 [2024-10-17 10:18:06.453493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:03.642 [2024-10-17 10:18:06.453506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.315 ms 00:19:03.642 [2024-10-17 10:18:06.453514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.642 [2024-10-17 10:18:06.453871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.642 [2024-10-17 10:18:06.453880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:03.642 [2024-10-17 10:18:06.453890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:03.642 [2024-10-17 10:18:06.453897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.642 [2024-10-17 10:18:06.497735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.642 [2024-10-17 10:18:06.497769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.642 [2024-10-17 10:18:06.497781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.642 [2024-10-17 10:18:06.497790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.642 [2024-10-17 10:18:06.497884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.642 [2024-10-17 10:18:06.497894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.642 [2024-10-17 10:18:06.497903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.642 [2024-10-17 10:18:06.497910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.642 [2024-10-17 10:18:06.497950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.642 [2024-10-17 10:18:06.497959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.643 [2024-10-17 10:18:06.497970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.497977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.497996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.498003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.643 [2024-10-17 10:18:06.498012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.498019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.574676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.574719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.643 [2024-10-17 10:18:06.574731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.574738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 
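
The persist steps earlier in this shutdown (L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock) plus 'Set FTL clean state' are what let the next startup load the persisted state instead of running dirty recovery; the Rollback records here are the teardown counterparts of the startup steps. This path is reached by deleting the bdev or stopping the app gracefully; a hedged sketch of the former, bdev name as in this run:

# Sketch: deleting an FTL bdev gracefully exercises the same 'FTL shutdown'
# management process as logged here.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
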
10:18:06.639002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.643 [2024-10-17 10:18:06.639075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:03.643 [2024-10-17 10:18:06.639188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:03.643 [2024-10-17 10:18:06.639240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:03.643 [2024-10-17 10:18:06.639358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:03.643 [2024-10-17 10:18:06.639412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:03.643 [2024-10-17 10:18:06.639476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:03.643 [2024-10-17 10:18:06.639535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:03.643 [2024-10-17 10:18:06.639543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:03.643 [2024-10-17 10:18:06.639550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.643 [2024-10-17 10:18:06.639676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.265 ms, result 0 00:19:04.209 10:18:07 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:04.465 [2024-10-17 10:18:07.346872] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:19:04.465 [2024-10-17 10:18:07.346994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74281 ] 00:19:04.465 [2024-10-17 10:18:07.496021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.723 [2024-10-17 10:18:07.596902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.983 [2024-10-17 10:18:07.848154] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.983 [2024-10-17 10:18:07.848214] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.983 [2024-10-17 10:18:08.007483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.007530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.983 [2024-10-17 10:18:08.007543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.983 [2024-10-17 10:18:08.007551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.010176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.010208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.983 [2024-10-17 10:18:08.010219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:19:04.983 [2024-10-17 10:18:08.010226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.010293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.983 [2024-10-17 10:18:08.011059] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.983 [2024-10-17 10:18:08.011089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.011097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.983 [2024-10-17 10:18:08.011106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:19:04.983 [2024-10-17 10:18:08.011113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.012208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.983 [2024-10-17 10:18:08.024971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.025005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.983 [2024-10-17 10:18:08.025017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.764 ms 00:19:04.983 [2024-10-17 10:18:08.025029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.025118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.025129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.983 [2024-10-17 10:18:08.025138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:04.983 [2024-10-17 
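
The read-back step: spdk_dd reopens the device from the JSON config and copies 65536 blocks out of ftl0 into a data file, which is why a second full 'FTL startup' sequence runs here. The invocation, reflowed for readability with flags verbatim from the log:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_dd" --ib=ftl0 \
    --of="$SPDK/test/ftl/data" \
    --count=65536 \
    --json="$SPDK/test/ftl/config/ftl.json"
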
10:18:08.025145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.030124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.030156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.983 [2024-10-17 10:18:08.030166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.940 ms 00:19:04.983 [2024-10-17 10:18:08.030173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.030259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.030268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.983 [2024-10-17 10:18:08.030276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:04.983 [2024-10-17 10:18:08.030283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.030306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.030314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.983 [2024-10-17 10:18:08.030322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.983 [2024-10-17 10:18:08.030331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.030351] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:04.983 [2024-10-17 10:18:08.033596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.033621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.983 [2024-10-17 10:18:08.033631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.251 ms 00:19:04.983 [2024-10-17 10:18:08.033638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.033670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.033678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.983 [2024-10-17 10:18:08.033686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:04.983 [2024-10-17 10:18:08.033693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.033711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.983 [2024-10-17 10:18:08.033728] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:04.983 [2024-10-17 10:18:08.033764] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.983 [2024-10-17 10:18:08.033779] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:04.983 [2024-10-17 10:18:08.033880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.983 [2024-10-17 10:18:08.033890] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.983 [2024-10-17 10:18:08.033901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:19:04.983 [2024-10-17 10:18:08.033910] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.983 [2024-10-17 10:18:08.033919] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.983 [2024-10-17 10:18:08.033926] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:04.983 [2024-10-17 10:18:08.033937] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.983 [2024-10-17 10:18:08.033944] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.983 [2024-10-17 10:18:08.033951] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.983 [2024-10-17 10:18:08.033958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.033965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.983 [2024-10-17 10:18:08.033973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:04.983 [2024-10-17 10:18:08.033980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.034077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.983 [2024-10-17 10:18:08.034092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.983 [2024-10-17 10:18:08.034100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:04.983 [2024-10-17 10:18:08.034117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.983 [2024-10-17 10:18:08.034216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.983 [2024-10-17 10:18:08.034226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.983 [2024-10-17 10:18:08.034234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.983 [2024-10-17 10:18:08.034242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.983 [2024-10-17 10:18:08.034256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:04.983 [2024-10-17 10:18:08.034271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.983 [2024-10-17 10:18:08.034278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.983 [2024-10-17 10:18:08.034291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.983 [2024-10-17 10:18:08.034297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:04.983 [2024-10-17 10:18:08.034306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.983 [2024-10-17 10:18:08.034319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.983 [2024-10-17 10:18:08.034326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:04.983 [2024-10-17 10:18:08.034332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:04.983 [2024-10-17 10:18:08.034345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:04.983 [2024-10-17 10:18:08.034351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.983 [2024-10-17 10:18:08.034364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.983 [2024-10-17 10:18:08.034377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.983 [2024-10-17 10:18:08.034383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:04.983 [2024-10-17 10:18:08.034389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.983 [2024-10-17 10:18:08.034396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.984 [2024-10-17 10:18:08.034402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:04.984 [2024-10-17 10:18:08.034409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.984 [2024-10-17 10:18:08.034415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.984 [2024-10-17 10:18:08.034421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:04.984 [2024-10-17 10:18:08.034428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.984 [2024-10-17 10:18:08.034435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.984 [2024-10-17 10:18:08.034442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:04.984 [2024-10-17 10:18:08.034448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.984 [2024-10-17 10:18:08.034454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.984 [2024-10-17 10:18:08.034460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:04.984 [2024-10-17 10:18:08.034466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.984 [2024-10-17 10:18:08.034472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.984 [2024-10-17 10:18:08.034479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:04.984 [2024-10-17 10:18:08.034485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.984 [2024-10-17 10:18:08.034491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.984 [2024-10-17 10:18:08.034497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:04.984 [2024-10-17 10:18:08.034503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.984 [2024-10-17 10:18:08.034509] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.984 [2024-10-17 10:18:08.034517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.984 [2024-10-17 10:18:08.034525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.984 [2024-10-17 10:18:08.034532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.984 [2024-10-17 10:18:08.034539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.984 [2024-10-17 10:18:08.034545] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.984 [2024-10-17 10:18:08.034552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.984 [2024-10-17 10:18:08.034558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.984 [2024-10-17 10:18:08.034564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.984 [2024-10-17 10:18:08.034571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.984 [2024-10-17 10:18:08.034579] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.984 [2024-10-17 10:18:08.034589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:04.984 [2024-10-17 10:18:08.034605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:04.984 [2024-10-17 10:18:08.034612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:04.984 [2024-10-17 10:18:08.034618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:04.984 [2024-10-17 10:18:08.034626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:04.984 [2024-10-17 10:18:08.034633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:04.984 [2024-10-17 10:18:08.034639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:04.984 [2024-10-17 10:18:08.034647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:04.984 [2024-10-17 10:18:08.034653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:04.984 [2024-10-17 10:18:08.034660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:04.984 [2024-10-17 10:18:08.034694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.984 [2024-10-17 10:18:08.034701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.984 [2024-10-17 10:18:08.034715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.984 [2024-10-17 10:18:08.034723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.984 [2024-10-17 10:18:08.034730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.984 [2024-10-17 10:18:08.034737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.984 [2024-10-17 10:18:08.034744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.984 [2024-10-17 10:18:08.034751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:19:04.984 [2024-10-17 10:18:08.034761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.984 [2024-10-17 10:18:08.060570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.984 [2024-10-17 10:18:08.060605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.984 [2024-10-17 10:18:08.060615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.749 ms 00:19:04.984 [2024-10-17 10:18:08.060623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.984 [2024-10-17 10:18:08.060743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.984 [2024-10-17 10:18:08.060753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.984 [2024-10-17 10:18:08.060761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:04.984 [2024-10-17 10:18:08.060772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.243 [2024-10-17 10:18:08.111569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.243 [2024-10-17 10:18:08.111611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.243 [2024-10-17 10:18:08.111624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.776 ms 00:19:05.243 [2024-10-17 10:18:08.111634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.243 [2024-10-17 10:18:08.111741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.243 [2024-10-17 10:18:08.111753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.243 [2024-10-17 10:18:08.111763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:05.243 [2024-10-17 10:18:08.111771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.243 [2024-10-17 10:18:08.112110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.112132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.244 [2024-10-17 10:18:08.112141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:19:05.244 [2024-10-17 10:18:08.112150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.112283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
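
Every management step in these sequences is logged as the same record group: Action, name, duration, status. That regularity makes runs easy to profile after the fact; a hedged sed/paste one-liner, again assuming a one-record-per-line capture in a hypothetical ftl.log:

# Pair each step name with its duration.
paste <(sed -n 's/.*trace_step.*name: //p' ftl.log) \
      <(sed -n 's/.*trace_step.*duration: //p' ftl.log)
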
[FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.112295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.244 [2024-10-17 10:18:08.112304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:05.244 [2024-10-17 10:18:08.112312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.125574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.125605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.244 [2024-10-17 10:18:08.125616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.242 ms 00:19:05.244 [2024-10-17 10:18:08.125623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.138399] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:05.244 [2024-10-17 10:18:08.138434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:05.244 [2024-10-17 10:18:08.138446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.138455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:05.244 [2024-10-17 10:18:08.138464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.723 ms 00:19:05.244 [2024-10-17 10:18:08.138471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.162583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.162624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:05.244 [2024-10-17 10:18:08.162635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.044 ms 00:19:05.244 [2024-10-17 10:18:08.162642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.174434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.174466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:05.244 [2024-10-17 10:18:08.174475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.724 ms 00:19:05.244 [2024-10-17 10:18:08.174482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.185974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.186004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:05.244 [2024-10-17 10:18:08.186014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.430 ms 00:19:05.244 [2024-10-17 10:18:08.186021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.186631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.186655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:05.244 [2024-10-17 10:18:08.186665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:19:05.244 [2024-10-17 10:18:08.186672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.241357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 
10:18:08.241412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:05.244 [2024-10-17 10:18:08.241425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.663 ms 00:19:05.244 [2024-10-17 10:18:08.241434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.251658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:05.244 [2024-10-17 10:18:08.265518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.265559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:05.244 [2024-10-17 10:18:08.265572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.991 ms 00:19:05.244 [2024-10-17 10:18:08.265579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.265658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.265671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:05.244 [2024-10-17 10:18:08.265680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:05.244 [2024-10-17 10:18:08.265688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.265732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.265740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:05.244 [2024-10-17 10:18:08.265748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:05.244 [2024-10-17 10:18:08.265755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.265779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.265790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:05.244 [2024-10-17 10:18:08.265800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:05.244 [2024-10-17 10:18:08.265807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.265835] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:05.244 [2024-10-17 10:18:08.265844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.265852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:05.244 [2024-10-17 10:18:08.265859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:05.244 [2024-10-17 10:18:08.265866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.289763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.289803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:05.244 [2024-10-17 10:18:08.289814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.875 ms 00:19:05.244 [2024-10-17 10:18:08.289822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.289909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.244 [2024-10-17 10:18:08.289919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:05.244 [2024-10-17 
10:18:08.289928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:05.244 [2024-10-17 10:18:08.289935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.244 [2024-10-17 10:18:08.290692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.244 [2024-10-17 10:18:08.293541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.931 ms, result 0 00:19:05.244 [2024-10-17 10:18:08.294634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:05.244 [2024-10-17 10:18:08.307388] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.620  [2024-10-17T10:18:10.647Z] Copying: 13/256 [MB] (13 MBps) [2024-10-17T10:18:11.607Z] Copying: 23724/262144 [kB] (9876 kBps) [2024-10-17T10:18:12.543Z] Copying: 33616/262144 [kB] (9892 kBps) [2024-10-17T10:18:13.478Z] Copying: 42/256 [MB] (10 MBps) [2024-10-17T10:18:14.412Z] Copying: 53/256 [MB] (10 MBps) [2024-10-17T10:18:15.784Z] Copying: 63/256 [MB] (10 MBps) [2024-10-17T10:18:16.719Z] Copying: 74/256 [MB] (11 MBps) [2024-10-17T10:18:17.654Z] Copying: 86/256 [MB] (11 MBps) [2024-10-17T10:18:18.589Z] Copying: 97/256 [MB] (11 MBps) [2024-10-17T10:18:19.523Z] Copying: 111/256 [MB] (13 MBps) [2024-10-17T10:18:20.458Z] Copying: 122/256 [MB] (11 MBps) [2024-10-17T10:18:21.394Z] Copying: 135/256 [MB] (13 MBps) [2024-10-17T10:18:22.768Z] Copying: 146/256 [MB] (10 MBps) [2024-10-17T10:18:23.702Z] Copying: 157/256 [MB] (11 MBps) [2024-10-17T10:18:24.641Z] Copying: 168/256 [MB] (10 MBps) [2024-10-17T10:18:25.585Z] Copying: 179/256 [MB] (11 MBps) [2024-10-17T10:18:26.520Z] Copying: 191/256 [MB] (11 MBps) [2024-10-17T10:18:27.453Z] Copying: 203/256 [MB] (11 MBps) [2024-10-17T10:18:28.387Z] Copying: 214/256 [MB] (11 MBps) [2024-10-17T10:18:29.763Z] Copying: 225/256 [MB] (11 MBps) [2024-10-17T10:18:30.698Z] Copying: 237/256 [MB] (11 MBps) [2024-10-17T10:18:31.264Z] Copying: 248/256 [MB] (11 MBps) [2024-10-17T10:18:31.524Z] Copying: 256/256 [MB] (average 11 MBps)[2024-10-17 10:18:31.318302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.433 [2024-10-17 10:18:31.328335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.328375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.433 [2024-10-17 10:18:31.328388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.433 [2024-10-17 10:18:31.328397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.328419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.433 [2024-10-17 10:18:31.331120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.331156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.433 [2024-10-17 10:18:31.331166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:19:28.433 [2024-10-17 10:18:31.331173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.331433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.331443] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.433 [2024-10-17 10:18:31.331451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:28.433 [2024-10-17 10:18:31.331459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.335550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.335570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.433 [2024-10-17 10:18:31.335580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.077 ms 00:19:28.433 [2024-10-17 10:18:31.335592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.342752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.342780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.433 [2024-10-17 10:18:31.342789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.143 ms 00:19:28.433 [2024-10-17 10:18:31.342797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.366313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.366347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.433 [2024-10-17 10:18:31.366358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.460 ms 00:19:28.433 [2024-10-17 10:18:31.366366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.379985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.380018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.433 [2024-10-17 10:18:31.380029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.579 ms 00:19:28.433 [2024-10-17 10:18:31.380041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.380191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.380202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.433 [2024-10-17 10:18:31.380210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:28.433 [2024-10-17 10:18:31.380218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.403940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.403972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.433 [2024-10-17 10:18:31.403981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.699 ms 00:19:28.433 [2024-10-17 10:18:31.403988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.427111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.427141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.433 [2024-10-17 10:18:31.427151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.090 ms 00:19:28.433 [2024-10-17 10:18:31.427158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.450041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.450084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.433 [2024-10-17 10:18:31.450094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.850 ms 00:19:28.433 [2024-10-17 10:18:31.450100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.472206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.433 [2024-10-17 10:18:31.472254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.433 [2024-10-17 10:18:31.472264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.037 ms 00:19:28.433 [2024-10-17 10:18:31.472272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.433 [2024-10-17 10:18:31.472304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.433 [2024-10-17 10:18:31.472318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472463] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.433 [2024-10-17 10:18:31.472574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 
10:18:31.472648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.434 [2024-10-17 10:18:31.472857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.472996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.434 [2024-10-17 10:18:31.473119] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.434 [2024-10-17 10:18:31.473127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cca8bd94-b064-499b-a946-5f5c31e51e40 00:19:28.434 [2024-10-17 10:18:31.473135] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.434 [2024-10-17 10:18:31.473142] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.434 [2024-10-17 10:18:31.473149] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.434 [2024-10-17 10:18:31.473156] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.434 [2024-10-17 10:18:31.473163] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.434 [2024-10-17 10:18:31.473171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.434 [2024-10-17 10:18:31.473178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.434 [2024-10-17 10:18:31.473184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.434 [2024-10-17 10:18:31.473191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.434 [2024-10-17 10:18:31.473198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.434 [2024-10-17 10:18:31.473205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.434 [2024-10-17 10:18:31.473214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:19:28.434 [2024-10-17 10:18:31.473223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.434 [2024-10-17 10:18:31.486230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.434 [2024-10-17 10:18:31.486294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.434 [2024-10-17 10:18:31.486306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.989 ms 00:19:28.434 [2024-10-17 10:18:31.486314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.434 [2024-10-17 10:18:31.486672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.434 [2024-10-17 10:18:31.486690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.435 [2024-10-17 10:18:31.486702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:28.435 
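The statistics block above also explains the "WAF: inf" line: write amplification factor is presumably computed as media writes over host writes, and this run logged 960 total writes against 0 user writes, so the ratio is undefined and printed as infinity. The same guarded division, using the two counters from the dump:

# Recompute the WAF that ftl_dev_dump_stats reported; awk guards the
# zero-denominator case the same way the dump's "inf" does.
total_writes=960
user_writes=0
awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { print (u == 0 ? "inf" : t / u) }'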
[2024-10-17 10:18:31.486709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.521196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.521229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.693 [2024-10-17 10:18:31.521240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.521248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.521335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.521344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.693 [2024-10-17 10:18:31.521355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.521363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.521402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.521412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.693 [2024-10-17 10:18:31.521419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.521426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.521442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.521450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.693 [2024-10-17 10:18:31.521457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.521467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.598233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.598279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.693 [2024-10-17 10:18:31.598291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.598299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.693 [2024-10-17 10:18:31.660446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.693 [2024-10-17 10:18:31.660535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.693 [2024-10-17 10:18:31.660586] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.693 [2024-10-17 10:18:31.660701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.693 [2024-10-17 10:18:31.660752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.693 [2024-10-17 10:18:31.660812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.660859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.693 [2024-10-17 10:18:31.660868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.693 [2024-10-17 10:18:31.660876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.693 [2024-10-17 10:18:31.660883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.693 [2024-10-17 10:18:31.661013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.677 ms, result 0 00:19:29.259 00:19:29.259 00:19:29.259 10:18:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:29.825 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:29.825 10:18:32 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:29.825 10:18:32 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:29.825 10:18:32 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:29.825 10:18:32 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:29.825 10:18:32 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:29.825 10:18:32 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:30.084 Process with pid 74228 is not found 00:19:30.084 10:18:32 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74228 00:19:30.084 10:18:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74228 ']' 00:19:30.084 10:18:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74228 00:19:30.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74228) - No such process 00:19:30.084 10:18:32 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74228 is not found' 00:19:30.084 00:19:30.084 real 1m18.915s 00:19:30.084 
user 1m50.832s 00:19:30.084 sys 0m4.873s 00:19:30.084 10:18:32 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.084 ************************************ 00:19:30.084 END TEST ftl_trim 00:19:30.084 ************************************ 00:19:30.084 10:18:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:30.084 10:18:32 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:30.084 10:18:32 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:30.084 10:18:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.084 10:18:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:30.084 ************************************ 00:19:30.084 START TEST ftl_restore 00:19:30.084 ************************************ 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:30.084 * Looking for test storage... 00:19:30.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.084 10:18:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:30.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.084 --rc genhtml_branch_coverage=1 00:19:30.084 --rc genhtml_function_coverage=1 00:19:30.084 --rc genhtml_legend=1 00:19:30.084 --rc geninfo_all_blocks=1 00:19:30.084 --rc geninfo_unexecuted_blocks=1 00:19:30.084 00:19:30.084 ' 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:30.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.084 --rc genhtml_branch_coverage=1 00:19:30.084 --rc genhtml_function_coverage=1 00:19:30.084 --rc genhtml_legend=1 00:19:30.084 --rc geninfo_all_blocks=1 00:19:30.084 --rc geninfo_unexecuted_blocks=1 00:19:30.084 00:19:30.084 ' 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:30.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.084 --rc genhtml_branch_coverage=1 00:19:30.084 --rc genhtml_function_coverage=1 00:19:30.084 --rc genhtml_legend=1 00:19:30.084 --rc geninfo_all_blocks=1 00:19:30.084 --rc geninfo_unexecuted_blocks=1 00:19:30.084 00:19:30.084 ' 00:19:30.084 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:30.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.084 --rc genhtml_branch_coverage=1 00:19:30.084 --rc genhtml_function_coverage=1 00:19:30.084 --rc genhtml_legend=1 00:19:30.084 --rc geninfo_all_blocks=1 00:19:30.084 --rc geninfo_unexecuted_blocks=1 00:19:30.084 00:19:30.084 ' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
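The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0: lcov --version piped through awk '{print $NF}' yields the version string, which cmp_versions splits on '.', '-' and ':' into arrays and compares component by component, so lt 1.15 2 returns 0 as soon as it sees 1 < 2. A minimal sketch of that less-than helper, assuming purely numeric components (the real cmp_versions handles more operators and padding):

# Sketch of the version comparison traced above; missing components
# default to 0, mirroring the ver1[v]/ver2[v] handling in scripts/common.sh.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older: less-than holds
    done
    return 1                                              # versions are equal
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is older than 2.0'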
00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.084 10:18:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.4kGURH55y6 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:30.344 
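restore.sh was started above as restore.sh -c 0000:00:10.0 0000:00:11.0: the traced getopts loop binds -c to the NV-cache PCIe address, the "shift 2" drops the parsed options, and the remaining positional argument becomes the base device, with a 240 s timeout and a restore_kill cleanup trap. A minimal sketch of that argument handling, not the full script (restore_kill stands in for the real cleanup function defined elsewhere):

# Sketch of the option parsing whose xtrace appears above.
nv_cache='' device='' timeout=240
while getopts ':u:c:f' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;        # -c <bdf>: NV-cache device, e.g. 0000:00:10.0
    esac
done
shift $(( OPTIND - 1 ))               # the xtrace shows this as 'shift 2'
device=$1                             # base device, e.g. 0000:00:11.0
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT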
10:18:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74618 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74618 00:19:30.344 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74618 ']' 00:19:30.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.344 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.344 10:18:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.344 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.344 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.344 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.344 10:18:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:30.344 [2024-10-17 10:18:33.250890] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:19:30.344 [2024-10-17 10:18:33.251017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74618 ] 00:19:30.344 [2024-10-17 10:18:33.398729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.602 [2024-10-17 10:18:33.496439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.169 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.169 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:31.169 10:18:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:31.169 10:18:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:31.169 10:18:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:31.169 10:18:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:31.169 10:18:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:31.169 10:18:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:31.427 10:18:34 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:31.427 10:18:34 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:31.427 10:18:34 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:31.427 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:31.427 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:31.428 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:31.428 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:31.428 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:31.687 { 00:19:31.687 "name": "nvme0n1", 00:19:31.687 "aliases": [ 00:19:31.687 "827db773-97e2-4fa2-a10c-0773a2816a8c" 00:19:31.687 ], 00:19:31.687 "product_name": "NVMe disk", 00:19:31.687 "block_size": 4096, 00:19:31.687 "num_blocks": 1310720, 00:19:31.687 "uuid": 
"827db773-97e2-4fa2-a10c-0773a2816a8c", 00:19:31.687 "numa_id": -1, 00:19:31.687 "assigned_rate_limits": { 00:19:31.687 "rw_ios_per_sec": 0, 00:19:31.687 "rw_mbytes_per_sec": 0, 00:19:31.687 "r_mbytes_per_sec": 0, 00:19:31.687 "w_mbytes_per_sec": 0 00:19:31.687 }, 00:19:31.687 "claimed": true, 00:19:31.687 "claim_type": "read_many_write_one", 00:19:31.687 "zoned": false, 00:19:31.687 "supported_io_types": { 00:19:31.687 "read": true, 00:19:31.687 "write": true, 00:19:31.687 "unmap": true, 00:19:31.687 "flush": true, 00:19:31.687 "reset": true, 00:19:31.687 "nvme_admin": true, 00:19:31.687 "nvme_io": true, 00:19:31.687 "nvme_io_md": false, 00:19:31.687 "write_zeroes": true, 00:19:31.687 "zcopy": false, 00:19:31.687 "get_zone_info": false, 00:19:31.687 "zone_management": false, 00:19:31.687 "zone_append": false, 00:19:31.687 "compare": true, 00:19:31.687 "compare_and_write": false, 00:19:31.687 "abort": true, 00:19:31.687 "seek_hole": false, 00:19:31.687 "seek_data": false, 00:19:31.687 "copy": true, 00:19:31.687 "nvme_iov_md": false 00:19:31.687 }, 00:19:31.687 "driver_specific": { 00:19:31.687 "nvme": [ 00:19:31.687 { 00:19:31.687 "pci_address": "0000:00:11.0", 00:19:31.687 "trid": { 00:19:31.687 "trtype": "PCIe", 00:19:31.687 "traddr": "0000:00:11.0" 00:19:31.687 }, 00:19:31.687 "ctrlr_data": { 00:19:31.687 "cntlid": 0, 00:19:31.687 "vendor_id": "0x1b36", 00:19:31.687 "model_number": "QEMU NVMe Ctrl", 00:19:31.687 "serial_number": "12341", 00:19:31.687 "firmware_revision": "8.0.0", 00:19:31.687 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:31.687 "oacs": { 00:19:31.687 "security": 0, 00:19:31.687 "format": 1, 00:19:31.687 "firmware": 0, 00:19:31.687 "ns_manage": 1 00:19:31.687 }, 00:19:31.687 "multi_ctrlr": false, 00:19:31.687 "ana_reporting": false 00:19:31.687 }, 00:19:31.687 "vs": { 00:19:31.687 "nvme_version": "1.4" 00:19:31.687 }, 00:19:31.687 "ns_data": { 00:19:31.687 "id": 1, 00:19:31.687 "can_share": false 00:19:31.687 } 00:19:31.687 } 00:19:31.687 ], 00:19:31.687 "mp_policy": "active_passive" 00:19:31.687 } 00:19:31.687 } 00:19:31.687 ]' 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:31.687 10:18:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:31.687 10:18:34 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:31.687 10:18:34 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:31.687 10:18:34 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:31.687 10:18:34 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:31.687 10:18:34 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:31.946 10:18:34 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7a805d68-6ac4-4d87-9fb9-5737246be389 00:19:31.946 10:18:34 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:31.946 10:18:34 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a805d68-6ac4-4d87-9fb9-5737246be389 00:19:32.204 10:18:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:32.463 10:18:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c3ef2a31-3604-49a5-990c-511d6190e987 00:19:32.463 10:18:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c3ef2a31-3604-49a5-990c-511d6190e987 00:19:32.463 10:18:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:32.722 10:18:35 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:32.722 { 00:19:32.722 "name": "64e3d77c-2d6b-4396-85f4-d18470f0233c", 00:19:32.722 "aliases": [ 00:19:32.722 "lvs/nvme0n1p0" 00:19:32.722 ], 00:19:32.722 "product_name": "Logical Volume", 00:19:32.722 "block_size": 4096, 00:19:32.722 "num_blocks": 26476544, 00:19:32.722 "uuid": "64e3d77c-2d6b-4396-85f4-d18470f0233c", 00:19:32.722 "assigned_rate_limits": { 00:19:32.722 "rw_ios_per_sec": 0, 00:19:32.722 "rw_mbytes_per_sec": 0, 00:19:32.722 "r_mbytes_per_sec": 0, 00:19:32.722 "w_mbytes_per_sec": 0 00:19:32.722 }, 00:19:32.722 "claimed": false, 00:19:32.722 "zoned": false, 00:19:32.722 "supported_io_types": { 00:19:32.722 "read": true, 00:19:32.722 "write": true, 00:19:32.722 "unmap": true, 00:19:32.722 "flush": false, 00:19:32.722 "reset": true, 00:19:32.722 "nvme_admin": false, 00:19:32.722 "nvme_io": false, 00:19:32.722 "nvme_io_md": false, 00:19:32.722 "write_zeroes": true, 00:19:32.722 "zcopy": false, 00:19:32.722 "get_zone_info": false, 00:19:32.722 "zone_management": false, 00:19:32.722 "zone_append": false, 00:19:32.722 "compare": false, 00:19:32.722 "compare_and_write": false, 00:19:32.722 "abort": false, 00:19:32.722 "seek_hole": true, 00:19:32.722 "seek_data": true, 00:19:32.722 "copy": false, 00:19:32.722 "nvme_iov_md": false 00:19:32.722 }, 00:19:32.722 "driver_specific": { 00:19:32.722 "lvol": { 00:19:32.722 "lvol_store_uuid": "c3ef2a31-3604-49a5-990c-511d6190e987", 00:19:32.722 "base_bdev": "nvme0n1", 00:19:32.722 "thin_provision": true, 00:19:32.722 "num_allocated_clusters": 0, 00:19:32.722 "snapshot": false, 00:19:32.722 "clone": false, 00:19:32.722 "esnap_clone": false 00:19:32.722 } 00:19:32.722 } 00:19:32.722 } 00:19:32.722 ]' 00:19:32.722 10:18:35 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:32.722 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:32.980 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:32.980 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:32.980 10:18:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:32.981 10:18:35 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:32.981 10:18:35 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:32.981 10:18:35 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:33.239 10:18:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:33.239 10:18:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:33.239 10:18:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:33.239 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:33.239 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:33.239 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:33.239 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:33.239 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:33.239 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:33.239 { 00:19:33.239 "name": "64e3d77c-2d6b-4396-85f4-d18470f0233c", 00:19:33.239 "aliases": [ 00:19:33.239 "lvs/nvme0n1p0" 00:19:33.239 ], 00:19:33.239 "product_name": "Logical Volume", 00:19:33.239 "block_size": 4096, 00:19:33.239 "num_blocks": 26476544, 00:19:33.239 "uuid": "64e3d77c-2d6b-4396-85f4-d18470f0233c", 00:19:33.239 "assigned_rate_limits": { 00:19:33.239 "rw_ios_per_sec": 0, 00:19:33.239 "rw_mbytes_per_sec": 0, 00:19:33.239 "r_mbytes_per_sec": 0, 00:19:33.239 "w_mbytes_per_sec": 0 00:19:33.239 }, 00:19:33.239 "claimed": false, 00:19:33.239 "zoned": false, 00:19:33.239 "supported_io_types": { 00:19:33.239 "read": true, 00:19:33.239 "write": true, 00:19:33.239 "unmap": true, 00:19:33.239 "flush": false, 00:19:33.239 "reset": true, 00:19:33.239 "nvme_admin": false, 00:19:33.239 "nvme_io": false, 00:19:33.239 "nvme_io_md": false, 00:19:33.239 "write_zeroes": true, 00:19:33.239 "zcopy": false, 00:19:33.239 "get_zone_info": false, 00:19:33.239 "zone_management": false, 00:19:33.239 "zone_append": false, 00:19:33.239 "compare": false, 00:19:33.239 "compare_and_write": false, 00:19:33.239 "abort": false, 00:19:33.239 "seek_hole": true, 00:19:33.239 "seek_data": true, 00:19:33.239 "copy": false, 00:19:33.239 "nvme_iov_md": false 00:19:33.239 }, 00:19:33.239 "driver_specific": { 00:19:33.239 "lvol": { 00:19:33.239 "lvol_store_uuid": "c3ef2a31-3604-49a5-990c-511d6190e987", 00:19:33.239 "base_bdev": "nvme0n1", 00:19:33.239 "thin_provision": true, 00:19:33.239 "num_allocated_clusters": 0, 00:19:33.240 "snapshot": false, 00:19:33.240 "clone": false, 00:19:33.240 "esnap_clone": false 00:19:33.240 } 00:19:33.240 } 00:19:33.240 } 00:19:33.240 ]' 00:19:33.240 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
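The get_bdev_size helper traced above (autotest_common.sh lines 1378-1388) derives a bdev's size in MiB from the bdev_get_bdevs JSON: it reads block_size and num_blocks with jq and computes block_size * num_blocks / 1024 / 1024. That is where the figures in this trace come from: 4096 * 1310720 / 2^20 = 5120 MiB for the QEMU NVMe namespace dumped earlier, and 4096 * 26476544 / 2^20 = 103424 MiB for the thin-provisioned lvol, matching the echo 5120 and echo 103424 results. A minimal standalone sketch of the same arithmetic, using the rpc.py path and the nvme0n1 bdev name from this run:

    # get_bdev_size, reduced to its arithmetic: size_mib = block_size * num_blocks / 2^20
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
    nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
    echo $(( bs * nb / 1024 / 1024 ))    # 4096 * 1310720 / 2^20 = 5120 (MiB)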
00:19:33.240 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:33.240 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:33.498 10:18:36 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:33.498 10:18:36 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:33.498 10:18:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:33.498 10:18:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:33.498 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 64e3d77c-2d6b-4396-85f4-d18470f0233c 00:19:33.757 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:33.757 { 00:19:33.757 "name": "64e3d77c-2d6b-4396-85f4-d18470f0233c", 00:19:33.757 "aliases": [ 00:19:33.757 "lvs/nvme0n1p0" 00:19:33.757 ], 00:19:33.757 "product_name": "Logical Volume", 00:19:33.757 "block_size": 4096, 00:19:33.757 "num_blocks": 26476544, 00:19:33.757 "uuid": "64e3d77c-2d6b-4396-85f4-d18470f0233c", 00:19:33.757 "assigned_rate_limits": { 00:19:33.757 "rw_ios_per_sec": 0, 00:19:33.757 "rw_mbytes_per_sec": 0, 00:19:33.757 "r_mbytes_per_sec": 0, 00:19:33.757 "w_mbytes_per_sec": 0 00:19:33.757 }, 00:19:33.757 "claimed": false, 00:19:33.757 "zoned": false, 00:19:33.757 "supported_io_types": { 00:19:33.757 "read": true, 00:19:33.757 "write": true, 00:19:33.757 "unmap": true, 00:19:33.757 "flush": false, 00:19:33.757 "reset": true, 00:19:33.757 "nvme_admin": false, 00:19:33.757 "nvme_io": false, 00:19:33.757 "nvme_io_md": false, 00:19:33.757 "write_zeroes": true, 00:19:33.757 "zcopy": false, 00:19:33.757 "get_zone_info": false, 00:19:33.757 "zone_management": false, 00:19:33.757 "zone_append": false, 00:19:33.757 "compare": false, 00:19:33.757 "compare_and_write": false, 00:19:33.757 "abort": false, 00:19:33.757 "seek_hole": true, 00:19:33.757 "seek_data": true, 00:19:33.757 "copy": false, 00:19:33.757 "nvme_iov_md": false 00:19:33.757 }, 00:19:33.757 "driver_specific": { 00:19:33.757 "lvol": { 00:19:33.757 "lvol_store_uuid": "c3ef2a31-3604-49a5-990c-511d6190e987", 00:19:33.757 "base_bdev": "nvme0n1", 00:19:33.757 "thin_provision": true, 00:19:33.757 "num_allocated_clusters": 0, 00:19:33.757 "snapshot": false, 00:19:33.757 "clone": false, 00:19:33.757 "esnap_clone": false 00:19:33.757 } 00:19:33.757 } 00:19:33.757 } 00:19:33.757 ]' 00:19:33.757 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:33.757 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:33.757 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:33.757 10:18:36 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:19:33.757 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:33.757 10:18:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 64e3d77c-2d6b-4396-85f4-d18470f0233c --l2p_dram_limit 10' 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:33.757 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:33.757 10:18:36 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 64e3d77c-2d6b-4396-85f4-d18470f0233c --l2p_dram_limit 10 -c nvc0n1p0 00:19:34.016 [2024-10-17 10:18:36.983039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.016 [2024-10-17 10:18:36.983095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.016 [2024-10-17 10:18:36.983108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:34.016 [2024-10-17 10:18:36.983115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.016 [2024-10-17 10:18:36.983158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.016 [2024-10-17 10:18:36.983167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.016 [2024-10-17 10:18:36.983176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:34.016 [2024-10-17 10:18:36.983196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.016 [2024-10-17 10:18:36.983216] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.016 [2024-10-17 10:18:36.983840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.016 [2024-10-17 10:18:36.983861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.016 [2024-10-17 10:18:36.983868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.016 [2024-10-17 10:18:36.983876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:19:34.016 [2024-10-17 10:18:36.983882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.016 [2024-10-17 10:18:36.983934] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 21bd6557-859f-4c45-bb05-166d03987101 00:19:34.016 [2024-10-17 10:18:36.984922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.016 [2024-10-17 10:18:36.984951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:34.016 [2024-10-17 10:18:36.984959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:34.016 [2024-10-17 10:18:36.984968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.016 [2024-10-17 10:18:36.989871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.016 [2024-10-17 
10:18:36.989900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.017 [2024-10-17 10:18:36.989908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.870 ms 00:19:34.017 [2024-10-17 10:18:36.989915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.989985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.017 [2024-10-17 10:18:36.989995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.017 [2024-10-17 10:18:36.990001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:34.017 [2024-10-17 10:18:36.990010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.990056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.017 [2024-10-17 10:18:36.990066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.017 [2024-10-17 10:18:36.990073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:34.017 [2024-10-17 10:18:36.990081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.990098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.017 [2024-10-17 10:18:36.993004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.017 [2024-10-17 10:18:36.993030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.017 [2024-10-17 10:18:36.993040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:19:34.017 [2024-10-17 10:18:36.993056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.993083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.017 [2024-10-17 10:18:36.993090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.017 [2024-10-17 10:18:36.993097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:34.017 [2024-10-17 10:18:36.993103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.993124] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:34.017 [2024-10-17 10:18:36.993233] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.017 [2024-10-17 10:18:36.993244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.017 [2024-10-17 10:18:36.993253] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:34.017 [2024-10-17 10:18:36.993262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993269] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993276] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:34.017 [2024-10-17 10:18:36.993282] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.017 [2024-10-17 10:18:36.993290] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.017 [2024-10-17 10:18:36.993295] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.017 [2024-10-17 10:18:36.993303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.017 [2024-10-17 10:18:36.993310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.017 [2024-10-17 10:18:36.993318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:19:34.017 [2024-10-17 10:18:36.993330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.993398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.017 [2024-10-17 10:18:36.993404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.017 [2024-10-17 10:18:36.993411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:34.017 [2024-10-17 10:18:36.993417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.017 [2024-10-17 10:18:36.993493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.017 [2024-10-17 10:18:36.993500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.017 [2024-10-17 10:18:36.993509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.017 [2024-10-17 10:18:36.993527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.017 [2024-10-17 10:18:36.993545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.017 [2024-10-17 10:18:36.993557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.017 [2024-10-17 10:18:36.993563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:34.017 [2024-10-17 10:18:36.993569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.017 [2024-10-17 10:18:36.993574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.017 [2024-10-17 10:18:36.993581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:34.017 [2024-10-17 10:18:36.993586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.017 [2024-10-17 10:18:36.993600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.017 [2024-10-17 10:18:36.993619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.017 
[2024-10-17 10:18:36.993635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.017 [2024-10-17 10:18:36.993653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.017 [2024-10-17 10:18:36.993670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.017 [2024-10-17 10:18:36.993689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.017 [2024-10-17 10:18:36.993700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.017 [2024-10-17 10:18:36.993706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:34.017 [2024-10-17 10:18:36.993712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.017 [2024-10-17 10:18:36.993717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.017 [2024-10-17 10:18:36.993723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:34.017 [2024-10-17 10:18:36.993728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.017 [2024-10-17 10:18:36.993739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:34.017 [2024-10-17 10:18:36.993745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993750] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.017 [2024-10-17 10:18:36.993757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.017 [2024-10-17 10:18:36.993763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.017 [2024-10-17 10:18:36.993776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.017 [2024-10-17 10:18:36.993784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.017 [2024-10-17 10:18:36.993790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.017 [2024-10-17 10:18:36.993797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.017 [2024-10-17 10:18:36.993802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.017 [2024-10-17 10:18:36.993808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.017 [2024-10-17 10:18:36.993816] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.017 [2024-10-17 
10:18:36.993825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.017 [2024-10-17 10:18:36.993831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:34.017 [2024-10-17 10:18:36.993838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:34.017 [2024-10-17 10:18:36.993844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:34.017 [2024-10-17 10:18:36.993850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:34.017 [2024-10-17 10:18:36.993855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:34.017 [2024-10-17 10:18:36.993862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:34.017 [2024-10-17 10:18:36.993867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:34.017 [2024-10-17 10:18:36.993874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:34.017 [2024-10-17 10:18:36.993880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:34.017 [2024-10-17 10:18:36.993888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:34.017 [2024-10-17 10:18:36.993893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:34.017 [2024-10-17 10:18:36.993899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:34.017 [2024-10-17 10:18:36.993905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:34.017 [2024-10-17 10:18:36.993911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:34.017 [2024-10-17 10:18:36.993917] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.018 [2024-10-17 10:18:36.993925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.018 [2024-10-17 10:18:36.993932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.018 [2024-10-17 10:18:36.993939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.018 [2024-10-17 10:18:36.993945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.018 [2024-10-17 10:18:36.993951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.018 [2024-10-17 10:18:36.993957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.018 [2024-10-17 10:18:36.993964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.018 [2024-10-17 10:18:36.993970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:19:34.018 [2024-10-17 10:18:36.993977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.018 [2024-10-17 10:18:36.994017] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:34.018 [2024-10-17 10:18:36.994028] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:37.301 [2024-10-17 10:18:40.213096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.213166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:37.301 [2024-10-17 10:18:40.213181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3219.067 ms 00:19:37.301 [2024-10-17 10:18:40.213191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.239006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.239063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.301 [2024-10-17 10:18:40.239076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:19:37.301 [2024-10-17 10:18:40.239085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.239208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.239221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.301 [2024-10-17 10:18:40.239229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:37.301 [2024-10-17 10:18:40.239240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.269876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.269923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.301 [2024-10-17 10:18:40.269934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.588 ms 00:19:37.301 [2024-10-17 10:18:40.269943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.269975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.269986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.301 [2024-10-17 10:18:40.269994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:37.301 [2024-10-17 10:18:40.270005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.270391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.270409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.301 [2024-10-17 10:18:40.270419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:37.301 [2024-10-17 10:18:40.270428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 
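The FTL startup sequence streaming above (superblock init, layout setup, a ~3.2 s scrub of the NV cache's 5 chunks) was kicked off by the bdev_ftl_create RPC traced before it: -b names the new FTL bdev, -d points at the thin-provisioned lvol serving as the base device, --l2p_dram_limit caps the in-DRAM L2P map at 10 MiB, and -c selects the nvc0n1p0 split as the non-volatile write-buffer cache. The call, as traced (UUID specific to this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 64e3d77c-2d6b-4396-85f4-d18470f0233c \
        --l2p_dram_limit 10 \
        -c nvc0n1p0

Note also the harmless-looking failure logged just before the call, restore.sh: line 54: [: : integer expression expected. The xtrace shows '[' '' -eq 1 ']', a numeric test against an empty string; presumably an optional variable is unset in this run. Giving it a default, e.g. [[ "${flag:-0}" -eq 1 ]] (flag standing in for whatever variable restore.sh actually tests there), would keep the test well-formed without changing which branch is taken.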
[2024-10-17 10:18:40.270532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.270550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.301 [2024-10-17 10:18:40.270559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:37.301 [2024-10-17 10:18:40.270570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.284596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.284631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.301 [2024-10-17 10:18:40.284640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.006 ms 00:19:37.301 [2024-10-17 10:18:40.284652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.295915] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:37.301 [2024-10-17 10:18:40.298656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.298688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.301 [2024-10-17 10:18:40.298701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.933 ms 00:19:37.301 [2024-10-17 10:18:40.298710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.384295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.384350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:37.301 [2024-10-17 10:18:40.384368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.554 ms 00:19:37.301 [2024-10-17 10:18:40.384376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.301 [2024-10-17 10:18:40.384561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.301 [2024-10-17 10:18:40.384572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.301 [2024-10-17 10:18:40.384585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:19:37.301 [2024-10-17 10:18:40.384595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.559 [2024-10-17 10:18:40.408218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.559 [2024-10-17 10:18:40.408257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:37.559 [2024-10-17 10:18:40.408272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.576 ms 00:19:37.559 [2024-10-17 10:18:40.408280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.430599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.430632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:37.560 [2024-10-17 10:18:40.430647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.278 ms 00:19:37.560 [2024-10-17 10:18:40.430655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.431234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.431250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.560 
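The ftl_l2p_cache message above, "l2p maximum resident size is: 9 (of 10) MiB", is the --l2p_dram_limit 10 taking effect. The full logical-to-physical table would be much larger: the layout setup earlier reported 20971520 L2P entries with an address size of 4 bytes, i.e. 20971520 * 4 / 2^20 = 80 MiB, exactly the 80.00 MiB "Region l2p" in the NV cache layout dump. With the limit in place, at most 9 MiB of that table is kept resident in DRAM at a time. The check, as shell arithmetic:

    # full L2P size implied by the layout dump: entries * address_size
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80 (MiB), against a 10 MiB DRAM limit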
[2024-10-17 10:18:40.431261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:19:37.560 [2024-10-17 10:18:40.431269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.500379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.500423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:37.560 [2024-10-17 10:18:40.500442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.075 ms 00:19:37.560 [2024-10-17 10:18:40.500451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.524624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.524658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:37.560 [2024-10-17 10:18:40.524673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.098 ms 00:19:37.560 [2024-10-17 10:18:40.524680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.547716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.547750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:37.560 [2024-10-17 10:18:40.547762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.998 ms 00:19:37.560 [2024-10-17 10:18:40.547769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.570457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.570490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.560 [2024-10-17 10:18:40.570503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.651 ms 00:19:37.560 [2024-10-17 10:18:40.570511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.570549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.570559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.560 [2024-10-17 10:18:40.570571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:37.560 [2024-10-17 10:18:40.570578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.570655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.560 [2024-10-17 10:18:40.570664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.560 [2024-10-17 10:18:40.570674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:37.560 [2024-10-17 10:18:40.570681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.560 [2024-10-17 10:18:40.571510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3588.053 ms, result 0 00:19:37.560 { 00:19:37.560 "name": "ftl0", 00:19:37.560 "uuid": "21bd6557-859f-4c45-bb05-166d03987101" 00:19:37.560 } 00:19:37.560 10:18:40 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:37.560 10:18:40 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:37.818 10:18:40 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:37.818 10:18:40 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:38.077 [2024-10-17 10:18:40.983181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:40.983228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:38.077 [2024-10-17 10:18:40.983240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.077 [2024-10-17 10:18:40.983256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:40.983279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.077 [2024-10-17 10:18:40.985886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:40.985915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:38.077 [2024-10-17 10:18:40.985929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:19:38.077 [2024-10-17 10:18:40.985938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:40.986211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:40.986222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:38.077 [2024-10-17 10:18:40.986232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:38.077 [2024-10-17 10:18:40.986239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:40.989474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:40.989493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:38.077 [2024-10-17 10:18:40.989504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:19:38.077 [2024-10-17 10:18:40.989513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:40.995704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:40.995732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:38.077 [2024-10-17 10:18:40.995744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:19:38.077 [2024-10-17 10:18:40.995751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.020042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.020083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.077 [2024-10-17 10:18:41.020096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.225 ms 00:19:38.077 [2024-10-17 10:18:41.020104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.035600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.035635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.077 [2024-10-17 10:18:41.035651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.455 ms 00:19:38.077 [2024-10-17 10:18:41.035659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.035804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.035815] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.077 [2024-10-17 10:18:41.035825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:38.077 [2024-10-17 10:18:41.035832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.059308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.059340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:38.077 [2024-10-17 10:18:41.059351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.458 ms 00:19:38.077 [2024-10-17 10:18:41.059359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.082762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.082795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:38.077 [2024-10-17 10:18:41.082807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.368 ms 00:19:38.077 [2024-10-17 10:18:41.082814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.105268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.105300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.077 [2024-10-17 10:18:41.105311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.419 ms 00:19:38.077 [2024-10-17 10:18:41.105318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.128411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.077 [2024-10-17 10:18:41.128443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:38.077 [2024-10-17 10:18:41.128455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.025 ms 00:19:38.077 [2024-10-17 10:18:41.128462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.077 [2024-10-17 10:18:41.128495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:38.077 [2024-10-17 10:18:41.128508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:38.077 [2024-10-17 10:18:41.128519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:38.077 [2024-10-17 10:18:41.128527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:38.077 [2024-10-17 10:18:41.128537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128587] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 
[2024-10-17 10:18:41.128793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.128994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:38.078 [2024-10-17 10:18:41.129003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:38.078 [2024-10-17 10:18:41.129342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:38.079 [2024-10-17 10:18:41.129358] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:38.079 [2024-10-17 10:18:41.129367] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21bd6557-859f-4c45-bb05-166d03987101 00:19:38.079 [2024-10-17 10:18:41.129375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:38.079 [2024-10-17 10:18:41.129385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:38.079 [2024-10-17 10:18:41.129394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:38.079 [2024-10-17 10:18:41.129403] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:38.079 [2024-10-17 10:18:41.129410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:38.079 [2024-10-17 10:18:41.129421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:38.079 [2024-10-17 10:18:41.129428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:38.079 [2024-10-17 10:18:41.129436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:38.079 [2024-10-17 10:18:41.129442] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:38.079 [2024-10-17 10:18:41.129451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.079 [2024-10-17 10:18:41.129458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:38.079 [2024-10-17 10:18:41.129467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:19:38.079 [2024-10-17 10:18:41.129474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.079 [2024-10-17 10:18:41.141717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.079 [2024-10-17 10:18:41.141747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:38.079 [2024-10-17 10:18:41.141759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms 00:19:38.079 [2024-10-17 10:18:41.141767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.079 [2024-10-17 10:18:41.142131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.079 [2024-10-17 10:18:41.142141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:38.079 [2024-10-17 10:18:41.142151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:38.079 [2024-10-17 10:18:41.142158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.183669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.183704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.338 [2024-10-17 10:18:41.183716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.183724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.183781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.183789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.338 [2024-10-17 10:18:41.183799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.183807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.183870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.183881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.338 [2024-10-17 10:18:41.183890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.183897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.183917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.183924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.338 [2024-10-17 10:18:41.183933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.183940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.261461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.261498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.338 [2024-10-17 10:18:41.261509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:38.338 [2024-10-17 10:18:41.261517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.324581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.324623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.338 [2024-10-17 10:18:41.324636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.324643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.324730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.324742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.338 [2024-10-17 10:18:41.324752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.324760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.324806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.324815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.338 [2024-10-17 10:18:41.324825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.324832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.324921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.324931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.338 [2024-10-17 10:18:41.324942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.324949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.324979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.324988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.338 [2024-10-17 10:18:41.324998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.325005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.325042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.325063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.338 [2024-10-17 10:18:41.325072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.325081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.325125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.338 [2024-10-17 10:18:41.325135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.338 [2024-10-17 10:18:41.325145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.338 [2024-10-17 10:18:41.325153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.338 [2024-10-17 10:18:41.325275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.061 ms, result 0 00:19:38.338 true 00:19:38.338 10:18:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74618 
00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74618 ']' 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74618 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74618 00:19:38.338 killing process with pid 74618 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74618' 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74618 00:19:38.338 10:18:41 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74618 00:19:44.899 10:18:47 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:49.085 262144+0 records in 00:19:49.085 262144+0 records out 00:19:49.085 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28779 s, 250 MB/s 00:19:49.085 10:18:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:51.614 10:18:54 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:51.614 [2024-10-17 10:18:54.131213] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:19:51.614 [2024-10-17 10:18:54.131333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74854 ] 00:19:51.614 [2024-10-17 10:18:54.280772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.614 [2024-10-17 10:18:54.376671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.614 [2024-10-17 10:18:54.627199] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.614 [2024-10-17 10:18:54.627260] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.874 [2024-10-17 10:18:54.785945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.785989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:51.874 [2024-10-17 10:18:54.786001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:51.874 [2024-10-17 10:18:54.786014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.786075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.786086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.874 [2024-10-17 10:18:54.786094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:51.874 [2024-10-17 10:18:54.786103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.786126] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:19:51.874 [2024-10-17 10:18:54.786820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:51.874 [2024-10-17 10:18:54.786841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.786851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.874 [2024-10-17 10:18:54.786859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:19:51.874 [2024-10-17 10:18:54.786866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.787908] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:51.874 [2024-10-17 10:18:54.800780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.800813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:51.874 [2024-10-17 10:18:54.800824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.873 ms 00:19:51.874 [2024-10-17 10:18:54.800832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.800883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.800893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:51.874 [2024-10-17 10:18:54.800903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:51.874 [2024-10-17 10:18:54.800911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.805731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.805763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.874 [2024-10-17 10:18:54.805772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:19:51.874 [2024-10-17 10:18:54.805779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.805847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.805855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.874 [2024-10-17 10:18:54.805864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:51.874 [2024-10-17 10:18:54.805870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.805915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.805925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:51.874 [2024-10-17 10:18:54.805932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.874 [2024-10-17 10:18:54.805939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.805958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.874 [2024-10-17 10:18:54.809138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.809164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.874 [2024-10-17 10:18:54.809174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:19:51.874 [2024-10-17 10:18:54.809182] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.809212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.809221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:51.874 [2024-10-17 10:18:54.809230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:51.874 [2024-10-17 10:18:54.809237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.809257] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:51.874 [2024-10-17 10:18:54.809276] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:51.874 [2024-10-17 10:18:54.809312] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:51.874 [2024-10-17 10:18:54.809330] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:51.874 [2024-10-17 10:18:54.809434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:51.874 [2024-10-17 10:18:54.809445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:51.874 [2024-10-17 10:18:54.809457] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:51.874 [2024-10-17 10:18:54.809468] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809477] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809486] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:51.874 [2024-10-17 10:18:54.809494] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:51.874 [2024-10-17 10:18:54.809503] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:51.874 [2024-10-17 10:18:54.809511] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:51.874 [2024-10-17 10:18:54.809519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.809530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:51.874 [2024-10-17 10:18:54.809539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:19:51.874 [2024-10-17 10:18:54.809546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.809630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.874 [2024-10-17 10:18:54.809639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:51.874 [2024-10-17 10:18:54.809648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:51.874 [2024-10-17 10:18:54.809656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.874 [2024-10-17 10:18:54.809756] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:51.874 [2024-10-17 10:18:54.809773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:51.874 [2024-10-17 10:18:54.809784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:19:51.874 [2024-10-17 10:18:54.809793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:51.874 [2024-10-17 10:18:54.809809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:51.874 [2024-10-17 10:18:54.809832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.874 [2024-10-17 10:18:54.809847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:51.874 [2024-10-17 10:18:54.809854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:51.874 [2024-10-17 10:18:54.809862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.874 [2024-10-17 10:18:54.809869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:51.874 [2024-10-17 10:18:54.809879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:51.874 [2024-10-17 10:18:54.809892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:51.874 [2024-10-17 10:18:54.809907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:51.874 [2024-10-17 10:18:54.809929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:51.874 [2024-10-17 10:18:54.809952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:51.874 [2024-10-17 10:18:54.809974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:51.874 [2024-10-17 10:18:54.809982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.874 [2024-10-17 10:18:54.809989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:51.874 [2024-10-17 10:18:54.809997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:51.874 [2024-10-17 10:18:54.810004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.874 [2024-10-17 10:18:54.810013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:51.874 [2024-10-17 10:18:54.810021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:51.874 [2024-10-17 10:18:54.810029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.874 [2024-10-17 10:18:54.810037] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:19:51.874 [2024-10-17 10:18:54.810044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:51.874 [2024-10-17 10:18:54.810063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.874 [2024-10-17 10:18:54.810071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:51.874 [2024-10-17 10:18:54.810079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:51.874 [2024-10-17 10:18:54.810087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.874 [2024-10-17 10:18:54.810094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:51.875 [2024-10-17 10:18:54.810101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:51.875 [2024-10-17 10:18:54.810109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.875 [2024-10-17 10:18:54.810116] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:51.875 [2024-10-17 10:18:54.810133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:51.875 [2024-10-17 10:18:54.810142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.875 [2024-10-17 10:18:54.810150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.875 [2024-10-17 10:18:54.810159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:51.875 [2024-10-17 10:18:54.810167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:51.875 [2024-10-17 10:18:54.810175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:51.875 [2024-10-17 10:18:54.810183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:51.875 [2024-10-17 10:18:54.810191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:51.875 [2024-10-17 10:18:54.810199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:51.875 [2024-10-17 10:18:54.810208] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:51.875 [2024-10-17 10:18:54.810217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:51.875 [2024-10-17 10:18:54.810235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:51.875 [2024-10-17 10:18:54.810243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:51.875 [2024-10-17 10:18:54.810251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:51.875 [2024-10-17 10:18:54.810259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:51.875 [2024-10-17 10:18:54.810268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:51.875 [2024-10-17 10:18:54.810275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:51.875 [2024-10-17 10:18:54.810282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:51.875 [2024-10-17 10:18:54.810288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:51.875 [2024-10-17 10:18:54.810295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:51.875 [2024-10-17 10:18:54.810331] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:51.875 [2024-10-17 10:18:54.810339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:51.875 [2024-10-17 10:18:54.810357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:51.875 [2024-10-17 10:18:54.810363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:51.875 [2024-10-17 10:18:54.810371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:51.875 [2024-10-17 10:18:54.810378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.810385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:51.875 [2024-10-17 10:18:54.810392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:19:51.875 [2024-10-17 10:18:54.810400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.835993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.836029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.875 [2024-10-17 10:18:54.836039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.542 ms 00:19:51.875 [2024-10-17 10:18:54.836056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.836137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.836149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.875 [2024-10-17 10:18:54.836157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:19:51.875 [2024-10-17 10:18:54.836163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.881763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.881803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.875 [2024-10-17 10:18:54.881815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.555 ms 00:19:51.875 [2024-10-17 10:18:54.881824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.881860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.881869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.875 [2024-10-17 10:18:54.881877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.875 [2024-10-17 10:18:54.881884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.882273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.882298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.875 [2024-10-17 10:18:54.882307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:51.875 [2024-10-17 10:18:54.882315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.882437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.882451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.875 [2024-10-17 10:18:54.882460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:51.875 [2024-10-17 10:18:54.882466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.895433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.895465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.875 [2024-10-17 10:18:54.895474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.945 ms 00:19:51.875 [2024-10-17 10:18:54.895483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.908193] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:51.875 [2024-10-17 10:18:54.908232] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:51.875 [2024-10-17 10:18:54.908242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.908250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:51.875 [2024-10-17 10:18:54.908258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.675 ms 00:19:51.875 [2024-10-17 10:18:54.908265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.932337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.932368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:51.875 [2024-10-17 10:18:54.932380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.039 ms 00:19:51.875 [2024-10-17 10:18:54.932392] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.944459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.944497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:51.875 [2024-10-17 10:18:54.944506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.032 ms 00:19:51.875 [2024-10-17 10:18:54.944512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.955926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.955957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:51.875 [2024-10-17 10:18:54.955966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.384 ms 00:19:51.875 [2024-10-17 10:18:54.955973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.875 [2024-10-17 10:18:54.956576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.875 [2024-10-17 10:18:54.956599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.875 [2024-10-17 10:18:54.956608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:19:51.875 [2024-10-17 10:18:54.956615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.011479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.011535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.134 [2024-10-17 10:18:55.011549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.847 ms 00:19:52.134 [2024-10-17 10:18:55.011558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.021867] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:52.134 [2024-10-17 10:18:55.024070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.024097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.134 [2024-10-17 10:18:55.024109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.467 ms 00:19:52.134 [2024-10-17 10:18:55.024117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.024205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.024216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.134 [2024-10-17 10:18:55.024225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.134 [2024-10-17 10:18:55.024234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.024299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.024313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.134 [2024-10-17 10:18:55.024322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:52.134 [2024-10-17 10:18:55.024331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.024351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.024360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:19:52.134 [2024-10-17 10:18:55.024369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.134 [2024-10-17 10:18:55.024377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.024407] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.134 [2024-10-17 10:18:55.024418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.024427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.134 [2024-10-17 10:18:55.024437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:52.134 [2024-10-17 10:18:55.024445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.047914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.047949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.134 [2024-10-17 10:18:55.047960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.451 ms 00:19:52.134 [2024-10-17 10:18:55.047969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.048037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.134 [2024-10-17 10:18:55.048058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.134 [2024-10-17 10:18:55.048067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:52.134 [2024-10-17 10:18:55.048074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.134 [2024-10-17 10:18:55.049592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.238 ms, result 0 00:19:53.087  [2024-10-17T10:18:57.112Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-17T10:18:58.488Z] Copying: 23116/1048576 [kB] (10228 kBps) [2024-10-17T10:18:59.423Z] Copying: 33200/1048576 [kB] (10084 kBps) [2024-10-17T10:19:00.358Z] Copying: 43176/1048576 [kB] (9976 kBps) [2024-10-17T10:19:01.294Z] Copying: 66/1024 [MB] (24 MBps) [2024-10-17T10:19:02.255Z] Copying: 81/1024 [MB] (14 MBps) [2024-10-17T10:19:03.190Z] Copying: 101/1024 [MB] (20 MBps) [2024-10-17T10:19:04.125Z] Copying: 113/1024 [MB] (12 MBps) [2024-10-17T10:19:05.073Z] Copying: 131/1024 [MB] (17 MBps) [2024-10-17T10:19:06.447Z] Copying: 147/1024 [MB] (15 MBps) [2024-10-17T10:19:07.380Z] Copying: 158/1024 [MB] (10 MBps) [2024-10-17T10:19:08.311Z] Copying: 178/1024 [MB] (20 MBps) [2024-10-17T10:19:09.258Z] Copying: 196/1024 [MB] (17 MBps) [2024-10-17T10:19:10.212Z] Copying: 213/1024 [MB] (16 MBps) [2024-10-17T10:19:11.147Z] Copying: 228/1024 [MB] (15 MBps) [2024-10-17T10:19:12.081Z] Copying: 242/1024 [MB] (14 MBps) [2024-10-17T10:19:13.453Z] Copying: 260/1024 [MB] (17 MBps) [2024-10-17T10:19:14.387Z] Copying: 275/1024 [MB] (14 MBps) [2024-10-17T10:19:15.321Z] Copying: 292/1024 [MB] (17 MBps) [2024-10-17T10:19:16.268Z] Copying: 311/1024 [MB] (19 MBps) [2024-10-17T10:19:17.202Z] Copying: 325/1024 [MB] (14 MBps) [2024-10-17T10:19:18.135Z] Copying: 338/1024 [MB] (12 MBps) [2024-10-17T10:19:19.070Z] Copying: 357/1024 [MB] (19 MBps) [2024-10-17T10:19:20.443Z] Copying: 373/1024 [MB] (15 MBps) [2024-10-17T10:19:21.375Z] Copying: 395/1024 [MB] (21 MBps) [2024-10-17T10:19:22.328Z] Copying: 419/1024 [MB] (24 MBps) [2024-10-17T10:19:23.260Z] 
Copying: 440/1024 [MB] (21 MBps) [2024-10-17T10:19:24.193Z] Copying: 463/1024 [MB] (23 MBps) [2024-10-17T10:19:25.126Z] Copying: 480/1024 [MB] (16 MBps) [2024-10-17T10:19:26.498Z] Copying: 499/1024 [MB] (19 MBps) [2024-10-17T10:19:27.063Z] Copying: 512/1024 [MB] (13 MBps) [2024-10-17T10:19:28.435Z] Copying: 532/1024 [MB] (20 MBps) [2024-10-17T10:19:29.369Z] Copying: 548/1024 [MB] (15 MBps) [2024-10-17T10:19:30.305Z] Copying: 577/1024 [MB] (28 MBps) [2024-10-17T10:19:31.240Z] Copying: 601/1024 [MB] (24 MBps) [2024-10-17T10:19:32.173Z] Copying: 616/1024 [MB] (14 MBps) [2024-10-17T10:19:33.106Z] Copying: 632/1024 [MB] (16 MBps) [2024-10-17T10:19:34.478Z] Copying: 644/1024 [MB] (12 MBps) [2024-10-17T10:19:35.411Z] Copying: 657/1024 [MB] (12 MBps) [2024-10-17T10:19:36.346Z] Copying: 671/1024 [MB] (14 MBps) [2024-10-17T10:19:37.280Z] Copying: 682/1024 [MB] (11 MBps) [2024-10-17T10:19:38.214Z] Copying: 694/1024 [MB] (11 MBps) [2024-10-17T10:19:39.147Z] Copying: 709/1024 [MB] (15 MBps) [2024-10-17T10:19:40.080Z] Copying: 729/1024 [MB] (19 MBps) [2024-10-17T10:19:41.453Z] Copying: 747/1024 [MB] (17 MBps) [2024-10-17T10:19:42.386Z] Copying: 760/1024 [MB] (13 MBps) [2024-10-17T10:19:43.319Z] Copying: 777/1024 [MB] (16 MBps) [2024-10-17T10:19:44.252Z] Copying: 792/1024 [MB] (15 MBps) [2024-10-17T10:19:45.184Z] Copying: 809/1024 [MB] (16 MBps) [2024-10-17T10:19:46.117Z] Copying: 824/1024 [MB] (14 MBps) [2024-10-17T10:19:47.491Z] Copying: 835/1024 [MB] (11 MBps) [2024-10-17T10:19:48.423Z] Copying: 853/1024 [MB] (17 MBps) [2024-10-17T10:19:49.407Z] Copying: 874/1024 [MB] (20 MBps) [2024-10-17T10:19:50.342Z] Copying: 888/1024 [MB] (14 MBps) [2024-10-17T10:19:51.276Z] Copying: 908/1024 [MB] (19 MBps) [2024-10-17T10:19:52.208Z] Copying: 923/1024 [MB] (15 MBps) [2024-10-17T10:19:53.140Z] Copying: 946/1024 [MB] (22 MBps) [2024-10-17T10:19:54.072Z] Copying: 972/1024 [MB] (26 MBps) [2024-10-17T10:19:55.445Z] Copying: 991/1024 [MB] (18 MBps) [2024-10-17T10:19:55.702Z] Copying: 1008/1024 [MB] (17 MBps) [2024-10-17T10:19:55.702Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-17 10:19:55.678401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.611 [2024-10-17 10:19:55.678443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.611 [2024-10-17 10:19:55.678456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:52.611 [2024-10-17 10:19:55.678464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.611 [2024-10-17 10:19:55.678484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.611 [2024-10-17 10:19:55.681081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.611 [2024-10-17 10:19:55.681118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.611 [2024-10-17 10:19:55.681129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:20:52.611 [2024-10-17 10:19:55.681137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.611 [2024-10-17 10:19:55.683874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.611 [2024-10-17 10:19:55.683915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.611 [2024-10-17 10:19:55.683925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:20:52.611 [2024-10-17 10:19:55.683932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:52.611 [2024-10-17 10:19:55.700634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.611 [2024-10-17 10:19:55.700669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.611 [2024-10-17 10:19:55.700680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.687 ms 00:20:52.611 [2024-10-17 10:19:55.700687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.706829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.706860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.870 [2024-10-17 10:19:55.706874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:20:52.870 [2024-10-17 10:19:55.706882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.731357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.731394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.870 [2024-10-17 10:19:55.731404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.428 ms 00:20:52.870 [2024-10-17 10:19:55.731411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.745937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.745970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.870 [2024-10-17 10:19:55.745982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.494 ms 00:20:52.870 [2024-10-17 10:19:55.745990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.746123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.746150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.870 [2024-10-17 10:19:55.746160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:52.870 [2024-10-17 10:19:55.746171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.769701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.769732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.870 [2024-10-17 10:19:55.769742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.516 ms 00:20:52.870 [2024-10-17 10:19:55.769749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.792868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.792900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.870 [2024-10-17 10:19:55.792918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.087 ms 00:20:52.870 [2024-10-17 10:19:55.792925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.815940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.815980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.870 [2024-10-17 10:19:55.815990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:20:52.870 
[2024-10-17 10:19:55.815997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.838685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.870 [2024-10-17 10:19:55.838717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.870 [2024-10-17 10:19:55.838728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.628 ms 00:20:52.870 [2024-10-17 10:19:55.838736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.870 [2024-10-17 10:19:55.838768] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.870 [2024-10-17 10:19:55.838782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838930] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.838996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.870 [2024-10-17 10:19:55.839092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 
[2024-10-17 10:19:55.839143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:20:52.871 [2024-10-17 10:19:55.839325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.871 [2024-10-17 10:19:55.839552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.871 [2024-10-17 10:19:55.839559] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21bd6557-859f-4c45-bb05-166d03987101 00:20:52.871 [2024-10-17 10:19:55.839572] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.871 [2024-10-17 10:19:55.839579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.871 [2024-10-17 10:19:55.839588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.871 [2024-10-17 10:19:55.839596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.871 [2024-10-17 10:19:55.839603] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.871 [2024-10-17 10:19:55.839611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.871 [2024-10-17 10:19:55.839618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.871 [2024-10-17 10:19:55.839631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.871 [2024-10-17 10:19:55.839637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.871 [2024-10-17 10:19:55.839644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.871 [2024-10-17 10:19:55.839651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.871 [2024-10-17 10:19:55.839659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:20:52.871 [2024-10-17 10:19:55.839666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.871 [2024-10-17 10:19:55.851930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.871 [2024-10-17 10:19:55.851962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.871 [2024-10-17 10:19:55.851973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.249 ms 00:20:52.871 [2024-10-17 10:19:55.851980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.871 [2024-10-17 10:19:55.852349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.871 [2024-10-17 10:19:55.852366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.871 [2024-10-17 10:19:55.852374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:20:52.871 [2024-10-17 10:19:55.852381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.871 [2024-10-17 10:19:55.884904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.871 [2024-10-17 10:19:55.884940] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.871 [2024-10-17 10:19:55.884949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.871 [2024-10-17 10:19:55.884957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.871 [2024-10-17 10:19:55.885011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.871 [2024-10-17 10:19:55.885019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.871 [2024-10-17 10:19:55.885026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.871 [2024-10-17 10:19:55.885034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.871 [2024-10-17 10:19:55.885098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.871 [2024-10-17 10:19:55.885112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.871 [2024-10-17 10:19:55.885120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.871 [2024-10-17 10:19:55.885127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.871 [2024-10-17 10:19:55.885142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.871 [2024-10-17 10:19:55.885150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.871 [2024-10-17 10:19:55.885157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.871 [2024-10-17 10:19:55.885164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:55.961154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:55.961199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:53.129 [2024-10-17 10:19:55.961210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:55.961218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:56.024177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:53.129 [2024-10-17 10:19:56.024188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:56.024272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.129 [2024-10-17 10:19:56.024283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:56.024329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.129 [2024-10-17 10:19:56.024337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:53.129 [2024-10-17 10:19:56.024438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.129 [2024-10-17 10:19:56.024448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:56.024491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:53.129 [2024-10-17 10:19:56.024499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:56.024546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.129 [2024-10-17 10:19:56.024554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.129 [2024-10-17 10:19:56.024611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.129 [2024-10-17 10:19:56.024619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.129 [2024-10-17 10:19:56.024626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.129 [2024-10-17 10:19:56.024735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.305 ms, result 0 00:20:54.088 00:20:54.088 00:20:54.088 10:19:57 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:54.088 [2024-10-17 10:19:57.169595] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:20:54.088 [2024-10-17 10:19:57.169718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75503 ] 00:20:54.361 [2024-10-17 10:19:57.319579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.361 [2024-10-17 10:19:57.416042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.618 [2024-10-17 10:19:57.669144] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.618 [2024-10-17 10:19:57.669200] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.878 [2024-10-17 10:19:57.827031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.827086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.878 [2024-10-17 10:19:57.827100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.878 [2024-10-17 10:19:57.827112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.827154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.827164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.878 [2024-10-17 10:19:57.827172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:54.878 [2024-10-17 10:19:57.827181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.827199] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.878 [2024-10-17 10:19:57.827829] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.878 [2024-10-17 10:19:57.827848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.827858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.878 [2024-10-17 10:19:57.827866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:20:54.878 [2024-10-17 10:19:57.827873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.828898] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.878 [2024-10-17 10:19:57.841523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.841558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.878 [2024-10-17 10:19:57.841570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.627 ms 00:20:54.878 [2024-10-17 10:19:57.841580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.841630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.841639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.878 [2024-10-17 10:19:57.841649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:54.878 [2024-10-17 10:19:57.841656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.846633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:54.878 [2024-10-17 10:19:57.846663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.878 [2024-10-17 10:19:57.846672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:20:54.878 [2024-10-17 10:19:57.846679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.846747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.846756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.878 [2024-10-17 10:19:57.846764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:54.878 [2024-10-17 10:19:57.846771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.846814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.846833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.878 [2024-10-17 10:19:57.846841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.878 [2024-10-17 10:19:57.846848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.846868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.878 [2024-10-17 10:19:57.850118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.850160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.878 [2024-10-17 10:19:57.850170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:20:54.878 [2024-10-17 10:19:57.850178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.850206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.850214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.878 [2024-10-17 10:19:57.850222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.878 [2024-10-17 10:19:57.850229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.850247] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.878 [2024-10-17 10:19:57.850264] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.878 [2024-10-17 10:19:57.850298] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.878 [2024-10-17 10:19:57.850320] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.878 [2024-10-17 10:19:57.850423] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.878 [2024-10-17 10:19:57.850438] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.878 [2024-10-17 10:19:57.850449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.878 [2024-10-17 10:19:57.850459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850467] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850475] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.878 [2024-10-17 10:19:57.850482] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.878 [2024-10-17 10:19:57.850489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.878 [2024-10-17 10:19:57.850495] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.878 [2024-10-17 10:19:57.850502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.850512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.878 [2024-10-17 10:19:57.850519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:20:54.878 [2024-10-17 10:19:57.850526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.850609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.878 [2024-10-17 10:19:57.850622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.878 [2024-10-17 10:19:57.850630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:54.878 [2024-10-17 10:19:57.850641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.878 [2024-10-17 10:19:57.850748] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.878 [2024-10-17 10:19:57.850762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.878 [2024-10-17 10:19:57.850773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.878 [2024-10-17 10:19:57.850795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.878 [2024-10-17 10:19:57.850816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.878 [2024-10-17 10:19:57.850830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.878 [2024-10-17 10:19:57.850840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.878 [2024-10-17 10:19:57.850848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.878 [2024-10-17 10:19:57.850855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.878 [2024-10-17 10:19:57.850862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:54.878 [2024-10-17 10:19:57.850874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.878 [2024-10-17 10:19:57.850887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850893] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.878 [2024-10-17 10:19:57.850906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.878 [2024-10-17 10:19:57.850927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.878 [2024-10-17 10:19:57.850946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.878 [2024-10-17 10:19:57.850952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.878 [2024-10-17 10:19:57.850959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.878 [2024-10-17 10:19:57.850965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:54.879 [2024-10-17 10:19:57.850972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.879 [2024-10-17 10:19:57.850978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.879 [2024-10-17 10:19:57.850985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:54.879 [2024-10-17 10:19:57.850992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.879 [2024-10-17 10:19:57.850999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.879 [2024-10-17 10:19:57.851005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:54.879 [2024-10-17 10:19:57.851011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.879 [2024-10-17 10:19:57.851018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.879 [2024-10-17 10:19:57.851025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:54.879 [2024-10-17 10:19:57.851031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.879 [2024-10-17 10:19:57.851038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.879 [2024-10-17 10:19:57.851044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:54.879 [2024-10-17 10:19:57.851064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.879 [2024-10-17 10:19:57.851071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.879 [2024-10-17 10:19:57.851080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.879 [2024-10-17 10:19:57.851088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.879 [2024-10-17 10:19:57.851096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.879 [2024-10-17 10:19:57.851104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.879 [2024-10-17 10:19:57.851111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.879 [2024-10-17 10:19:57.851118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.879 
[2024-10-17 10:19:57.851125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.879 [2024-10-17 10:19:57.851132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.879 [2024-10-17 10:19:57.851139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.879 [2024-10-17 10:19:57.851147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.879 [2024-10-17 10:19:57.851157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.879 [2024-10-17 10:19:57.851173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:54.879 [2024-10-17 10:19:57.851179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:54.879 [2024-10-17 10:19:57.851186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:54.879 [2024-10-17 10:19:57.851194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:54.879 [2024-10-17 10:19:57.851204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:54.879 [2024-10-17 10:19:57.851212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:54.879 [2024-10-17 10:19:57.851219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:54.879 [2024-10-17 10:19:57.851226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:54.879 [2024-10-17 10:19:57.851233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:54.879 [2024-10-17 10:19:57.851267] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.879 [2024-10-17 10:19:57.851275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.879 [2024-10-17 10:19:57.851292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.879 [2024-10-17 10:19:57.851299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.879 [2024-10-17 10:19:57.851307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.879 [2024-10-17 10:19:57.851315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.851327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.879 [2024-10-17 10:19:57.851335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:20:54.879 [2024-10-17 10:19:57.851342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.877170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.877205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.879 [2024-10-17 10:19:57.877215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.772 ms 00:20:54.879 [2024-10-17 10:19:57.877222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.877298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.877309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.879 [2024-10-17 10:19:57.877317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:54.879 [2024-10-17 10:19:57.877324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.918177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.918219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.879 [2024-10-17 10:19:57.918232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.805 ms 00:20:54.879 [2024-10-17 10:19:57.918239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.918276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.918286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.879 [2024-10-17 10:19:57.918294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.879 [2024-10-17 10:19:57.918301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.918654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.918670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.879 [2024-10-17 10:19:57.918679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:54.879 [2024-10-17 10:19:57.918686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.918809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.918828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.879 [2024-10-17 10:19:57.918836] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:54.879 [2024-10-17 10:19:57.918843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.931891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.931924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.879 [2024-10-17 10:19:57.931934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.029 ms 00:20:54.879 [2024-10-17 10:19:57.931941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.879 [2024-10-17 10:19:57.944859] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:54.879 [2024-10-17 10:19:57.944894] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.879 [2024-10-17 10:19:57.944906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.879 [2024-10-17 10:19:57.944915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.879 [2024-10-17 10:19:57.944923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.861 ms 00:20:54.879 [2024-10-17 10:19:57.944930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:57.969186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:57.969222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.138 [2024-10-17 10:19:57.969238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.218 ms 00:20:55.138 [2024-10-17 10:19:57.969247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:57.981032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:57.981070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.138 [2024-10-17 10:19:57.981080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.747 ms 00:20:55.138 [2024-10-17 10:19:57.981087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:57.992643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:57.992675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.138 [2024-10-17 10:19:57.992686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.524 ms 00:20:55.138 [2024-10-17 10:19:57.992693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:57.993289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:57.993314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.138 [2024-10-17 10:19:57.993323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:20:55.138 [2024-10-17 10:19:57.993331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.049423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.049474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.138 [2024-10-17 10:19:58.049486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.074 ms 00:20:55.138 [2024-10-17 10:19:58.049498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.059848] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.138 [2024-10-17 10:19:58.062282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.062314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.138 [2024-10-17 10:19:58.062326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.743 ms 00:20:55.138 [2024-10-17 10:19:58.062335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.062420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.062430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.138 [2024-10-17 10:19:58.062439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:55.138 [2024-10-17 10:19:58.062446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.062510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.062521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.138 [2024-10-17 10:19:58.062529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:55.138 [2024-10-17 10:19:58.062537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.062555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.062563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.138 [2024-10-17 10:19:58.062571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.138 [2024-10-17 10:19:58.062578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.062606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.138 [2024-10-17 10:19:58.062616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.062625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.138 [2024-10-17 10:19:58.062633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:55.138 [2024-10-17 10:19:58.062640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.086259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.086295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.138 [2024-10-17 10:19:58.086307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.602 ms 00:20:55.138 [2024-10-17 10:19:58.086316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.138 [2024-10-17 10:19:58.086390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.138 [2024-10-17 10:19:58.086400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.138 [2024-10-17 10:19:58.086408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:55.138 [2024-10-17 10:19:58.086416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:55.138 [2024-10-17 10:19:58.087284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.823 ms, result 0 00:20:56.512  [2024-10-17T10:20:00.536Z] Copying: 13/1024 [MB] (13 MBps) [2024-10-17T10:20:01.469Z] Copying: 31/1024 [MB] (17 MBps) [2024-10-17T10:20:02.401Z] Copying: 46/1024 [MB] (14 MBps) [2024-10-17T10:20:03.332Z] Copying: 57/1024 [MB] (11 MBps) [2024-10-17T10:20:04.312Z] Copying: 71/1024 [MB] (14 MBps) [2024-10-17T10:20:05.685Z] Copying: 88/1024 [MB] (16 MBps) [2024-10-17T10:20:06.620Z] Copying: 114/1024 [MB] (25 MBps) [2024-10-17T10:20:07.554Z] Copying: 134/1024 [MB] (19 MBps) [2024-10-17T10:20:08.489Z] Copying: 157/1024 [MB] (23 MBps) [2024-10-17T10:20:09.422Z] Copying: 176/1024 [MB] (18 MBps) [2024-10-17T10:20:10.357Z] Copying: 188/1024 [MB] (12 MBps) [2024-10-17T10:20:11.291Z] Copying: 200/1024 [MB] (11 MBps) [2024-10-17T10:20:12.698Z] Copying: 211/1024 [MB] (11 MBps) [2024-10-17T10:20:13.264Z] Copying: 224/1024 [MB] (12 MBps) [2024-10-17T10:20:14.636Z] Copying: 237/1024 [MB] (12 MBps) [2024-10-17T10:20:15.570Z] Copying: 249/1024 [MB] (12 MBps) [2024-10-17T10:20:16.504Z] Copying: 261/1024 [MB] (11 MBps) [2024-10-17T10:20:17.440Z] Copying: 272/1024 [MB] (11 MBps) [2024-10-17T10:20:18.419Z] Copying: 283/1024 [MB] (11 MBps) [2024-10-17T10:20:19.353Z] Copying: 294/1024 [MB] (11 MBps) [2024-10-17T10:20:20.285Z] Copying: 306/1024 [MB] (11 MBps) [2024-10-17T10:20:21.657Z] Copying: 317/1024 [MB] (11 MBps) [2024-10-17T10:20:22.590Z] Copying: 328/1024 [MB] (11 MBps) [2024-10-17T10:20:23.522Z] Copying: 339/1024 [MB] (11 MBps) [2024-10-17T10:20:24.455Z] Copying: 350/1024 [MB] (10 MBps) [2024-10-17T10:20:25.414Z] Copying: 362/1024 [MB] (11 MBps) [2024-10-17T10:20:26.348Z] Copying: 374/1024 [MB] (11 MBps) [2024-10-17T10:20:27.283Z] Copying: 384/1024 [MB] (10 MBps) [2024-10-17T10:20:28.657Z] Copying: 395/1024 [MB] (11 MBps) [2024-10-17T10:20:29.592Z] Copying: 407/1024 [MB] (11 MBps) [2024-10-17T10:20:30.526Z] Copying: 417/1024 [MB] (10 MBps) [2024-10-17T10:20:31.471Z] Copying: 428/1024 [MB] (10 MBps) [2024-10-17T10:20:32.405Z] Copying: 439/1024 [MB] (10 MBps) [2024-10-17T10:20:33.341Z] Copying: 450/1024 [MB] (11 MBps) [2024-10-17T10:20:34.276Z] Copying: 461/1024 [MB] (11 MBps) [2024-10-17T10:20:35.650Z] Copying: 472/1024 [MB] (11 MBps) [2024-10-17T10:20:36.583Z] Copying: 483/1024 [MB] (10 MBps) [2024-10-17T10:20:37.515Z] Copying: 494/1024 [MB] (10 MBps) [2024-10-17T10:20:38.500Z] Copying: 505/1024 [MB] (11 MBps) [2024-10-17T10:20:39.433Z] Copying: 516/1024 [MB] (11 MBps) [2024-10-17T10:20:40.367Z] Copying: 528/1024 [MB] (11 MBps) [2024-10-17T10:20:41.300Z] Copying: 539/1024 [MB] (11 MBps) [2024-10-17T10:20:42.672Z] Copying: 550/1024 [MB] (10 MBps) [2024-10-17T10:20:43.606Z] Copying: 561/1024 [MB] (10 MBps) [2024-10-17T10:20:44.542Z] Copying: 572/1024 [MB] (10 MBps) [2024-10-17T10:20:45.516Z] Copying: 582/1024 [MB] (10 MBps) [2024-10-17T10:20:46.451Z] Copying: 592/1024 [MB] (10 MBps) [2024-10-17T10:20:47.386Z] Copying: 603/1024 [MB] (10 MBps) [2024-10-17T10:20:48.323Z] Copying: 613/1024 [MB] (10 MBps) [2024-10-17T10:20:49.698Z] Copying: 624/1024 [MB] (10 MBps) [2024-10-17T10:20:50.265Z] Copying: 634/1024 [MB] (10 MBps) [2024-10-17T10:20:51.640Z] Copying: 644/1024 [MB] (10 MBps) [2024-10-17T10:20:52.575Z] Copying: 655/1024 [MB] (10 MBps) [2024-10-17T10:20:53.529Z] Copying: 666/1024 [MB] (10 MBps) [2024-10-17T10:20:54.463Z] Copying: 676/1024 [MB] (10 MBps) [2024-10-17T10:20:55.397Z] Copying: 686/1024 [MB] (10 MBps) 
[2024-10-17T10:20:56.330Z] Copying: 697/1024 [MB] (10 MBps) [2024-10-17T10:20:57.264Z] Copying: 707/1024 [MB] (10 MBps) [2024-10-17T10:20:58.639Z] Copying: 718/1024 [MB] (10 MBps) [2024-10-17T10:20:59.575Z] Copying: 730/1024 [MB] (11 MBps) [2024-10-17T10:21:00.508Z] Copying: 741/1024 [MB] (11 MBps) [2024-10-17T10:21:01.452Z] Copying: 753/1024 [MB] (11 MBps) [2024-10-17T10:21:02.387Z] Copying: 765/1024 [MB] (12 MBps) [2024-10-17T10:21:03.321Z] Copying: 776/1024 [MB] (11 MBps) [2024-10-17T10:21:04.693Z] Copying: 787/1024 [MB] (10 MBps) [2024-10-17T10:21:05.627Z] Copying: 797/1024 [MB] (10 MBps) [2024-10-17T10:21:06.562Z] Copying: 808/1024 [MB] (10 MBps) [2024-10-17T10:21:07.494Z] Copying: 818/1024 [MB] (10 MBps) [2024-10-17T10:21:08.429Z] Copying: 829/1024 [MB] (10 MBps) [2024-10-17T10:21:09.364Z] Copying: 841/1024 [MB] (11 MBps) [2024-10-17T10:21:10.298Z] Copying: 851/1024 [MB] (10 MBps) [2024-10-17T10:21:11.673Z] Copying: 862/1024 [MB] (10 MBps) [2024-10-17T10:21:12.608Z] Copying: 872/1024 [MB] (10 MBps) [2024-10-17T10:21:13.542Z] Copying: 883/1024 [MB] (10 MBps) [2024-10-17T10:21:14.478Z] Copying: 914948/1048576 [kB] (10228 kBps) [2024-10-17T10:21:15.414Z] Copying: 903/1024 [MB] (10 MBps) [2024-10-17T10:21:16.350Z] Copying: 935812/1048576 [kB] (10120 kBps) [2024-10-17T10:21:17.284Z] Copying: 946024/1048576 [kB] (10212 kBps) [2024-10-17T10:21:18.659Z] Copying: 933/1024 [MB] (10 MBps) [2024-10-17T10:21:19.594Z] Copying: 943/1024 [MB] (10 MBps) [2024-10-17T10:21:20.619Z] Copying: 954/1024 [MB] (10 MBps) [2024-10-17T10:21:21.554Z] Copying: 970/1024 [MB] (16 MBps) [2024-10-17T10:21:22.509Z] Copying: 981/1024 [MB] (11 MBps) [2024-10-17T10:21:23.450Z] Copying: 993/1024 [MB] (11 MBps) [2024-10-17T10:21:24.394Z] Copying: 1004/1024 [MB] (11 MBps) [2024-10-17T10:21:25.338Z] Copying: 1014/1024 [MB] (10 MBps) [2024-10-17T10:21:25.600Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-10-17 10:21:25.537257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.509 [2024-10-17 10:21:25.537375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:22.509 [2024-10-17 10:21:25.537397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:22.509 [2024-10-17 10:21:25.537408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.509 [2024-10-17 10:21:25.537438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:22.509 [2024-10-17 10:21:25.544256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.509 [2024-10-17 10:21:25.544322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:22.509 [2024-10-17 10:21:25.544339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:22:22.509 [2024-10-17 10:21:25.544353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.509 [2024-10-17 10:21:25.544722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.509 [2024-10-17 10:21:25.544752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:22.509 [2024-10-17 10:21:25.544766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:22.509 [2024-10-17 10:21:25.544779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.509 [2024-10-17 10:21:25.550076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.509 [2024-10-17 10:21:25.550110] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:22.509 [2024-10-17 10:21:25.550124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.276 ms 00:22:22.509 [2024-10-17 10:21:25.550137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.509 [2024-10-17 10:21:25.556992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.509 [2024-10-17 10:21:25.557063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:22.509 [2024-10-17 10:21:25.557076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.798 ms 00:22:22.509 [2024-10-17 10:21:25.557086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.509 [2024-10-17 10:21:25.587000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.509 [2024-10-17 10:21:25.587078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:22.509 [2024-10-17 10:21:25.587093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.833 ms 00:22:22.509 [2024-10-17 10:21:25.587102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.604464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.772 [2024-10-17 10:21:25.604517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:22.772 [2024-10-17 10:21:25.604531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.305 ms 00:22:22.772 [2024-10-17 10:21:25.604542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.604704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.772 [2024-10-17 10:21:25.604721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:22.772 [2024-10-17 10:21:25.604739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:22.772 [2024-10-17 10:21:25.604748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.632709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.772 [2024-10-17 10:21:25.632761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:22.772 [2024-10-17 10:21:25.632775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.943 ms 00:22:22.772 [2024-10-17 10:21:25.632783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.659117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.772 [2024-10-17 10:21:25.659182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:22.772 [2024-10-17 10:21:25.659196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.279 ms 00:22:22.772 [2024-10-17 10:21:25.659204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.684953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.772 [2024-10-17 10:21:25.685005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:22.772 [2024-10-17 10:21:25.685018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:22:22.772 [2024-10-17 10:21:25.685027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.711007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:22.772 [2024-10-17 10:21:25.711074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:22.772 [2024-10-17 10:21:25.711088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.871 ms 00:22:22.772 [2024-10-17 10:21:25.711097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.772 [2024-10-17 10:21:25.711148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:22.772 [2024-10-17 10:21:25.711168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:22.772 [2024-10-17 10:21:25.711306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711591] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 
10:21:25.711803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.711992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.712000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:22:22.773 [2024-10-17 10:21:25.712008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.712016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.712025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.712039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:22.773 [2024-10-17 10:21:25.712070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:22.773 [2024-10-17 10:21:25.712086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21bd6557-859f-4c45-bb05-166d03987101 00:22:22.773 [2024-10-17 10:21:25.712099] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:22.773 [2024-10-17 10:21:25.712111] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:22.773 [2024-10-17 10:21:25.712120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:22.773 [2024-10-17 10:21:25.712129] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:22.773 [2024-10-17 10:21:25.712139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:22.773 [2024-10-17 10:21:25.712150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:22.773 [2024-10-17 10:21:25.712168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:22.773 [2024-10-17 10:21:25.712176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:22.773 [2024-10-17 10:21:25.712183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:22.774 [2024-10-17 10:21:25.712191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.774 [2024-10-17 10:21:25.712200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:22.774 [2024-10-17 10:21:25.712210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:22:22.774 [2024-10-17 10:21:25.712220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.774 [2024-10-17 10:21:25.727147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.774 [2024-10-17 10:21:25.727193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:22.774 [2024-10-17 10:21:25.727206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.889 ms 00:22:22.774 [2024-10-17 10:21:25.727215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.774 [2024-10-17 10:21:25.727642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.774 [2024-10-17 10:21:25.727664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:22.774 [2024-10-17 10:21:25.727675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:22:22.774 [2024-10-17 10:21:25.727687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.774 [2024-10-17 10:21:25.767437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.774 [2024-10-17 10:21:25.767489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.774 [2024-10-17 10:21:25.767502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.774 
[2024-10-17 10:21:25.767512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.774 [2024-10-17 10:21:25.767589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.774 [2024-10-17 10:21:25.767600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.774 [2024-10-17 10:21:25.767611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.774 [2024-10-17 10:21:25.767621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.774 [2024-10-17 10:21:25.767733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.774 [2024-10-17 10:21:25.767748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.774 [2024-10-17 10:21:25.767757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.774 [2024-10-17 10:21:25.767766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.774 [2024-10-17 10:21:25.767784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.774 [2024-10-17 10:21:25.767794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.774 [2024-10-17 10:21:25.767802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.774 [2024-10-17 10:21:25.767811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.861462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.861527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.036 [2024-10-17 10:21:25.861543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.861552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.937358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.937425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.036 [2024-10-17 10:21:25.937442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.937453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.937528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.937547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.036 [2024-10-17 10:21:25.937558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.937567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.937639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.937653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.036 [2024-10-17 10:21:25.937665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.937673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.937788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.937818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.036 [2024-10-17 10:21:25.937829] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.937838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.937881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.937892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.036 [2024-10-17 10:21:25.937903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.937913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.937966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.937982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.036 [2024-10-17 10:21:25.937995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.938006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.938096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.036 [2024-10-17 10:21:25.938125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.036 [2024-10-17 10:21:25.938134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.036 [2024-10-17 10:21:25.938144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.036 [2024-10-17 10:21:25.938327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.027 ms, result 0 00:22:23.981 00:22:23.981 00:22:23.981 10:21:26 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:25.895 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:25.895 10:21:28 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:26.157 [2024-10-17 10:21:29.051507] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:22:26.157 [2024-10-17 10:21:29.051713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76453 ] 00:22:26.157 [2024-10-17 10:21:29.204196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.418 [2024-10-17 10:21:29.361927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.680 [2024-10-17 10:21:29.698193] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:26.680 [2024-10-17 10:21:29.698289] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:26.943 [2024-10-17 10:21:29.872069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.872137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:26.943 [2024-10-17 10:21:29.872154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:26.943 [2024-10-17 10:21:29.872169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.872224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.872236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:26.943 [2024-10-17 10:21:29.872246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:26.943 [2024-10-17 10:21:29.872257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.872278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:26.943 [2024-10-17 10:21:29.873012] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:26.943 [2024-10-17 10:21:29.873041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.873072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:26.943 [2024-10-17 10:21:29.873082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:22:26.943 [2024-10-17 10:21:29.873090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.874962] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:26.943 [2024-10-17 10:21:29.889527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.889582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:26.943 [2024-10-17 10:21:29.889596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.567 ms 00:22:26.943 [2024-10-17 10:21:29.889605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.889680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.889690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:26.943 [2024-10-17 10:21:29.889702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:26.943 [2024-10-17 10:21:29.889710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.897799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:26.943 [2024-10-17 10:21:29.897849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:26.943 [2024-10-17 10:21:29.897861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.010 ms 00:22:26.943 [2024-10-17 10:21:29.897869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.897955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.897965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:26.943 [2024-10-17 10:21:29.897974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:26.943 [2024-10-17 10:21:29.897982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.898027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.898038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:26.943 [2024-10-17 10:21:29.898074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:26.943 [2024-10-17 10:21:29.898084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.898110] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:26.943 [2024-10-17 10:21:29.902286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.902326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:26.943 [2024-10-17 10:21:29.902338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:22:26.943 [2024-10-17 10:21:29.902349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.902384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.902394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:26.943 [2024-10-17 10:21:29.902403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:26.943 [2024-10-17 10:21:29.902411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.902463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:26.943 [2024-10-17 10:21:29.902487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:26.943 [2024-10-17 10:21:29.902525] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:26.943 [2024-10-17 10:21:29.902546] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:26.943 [2024-10-17 10:21:29.902652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:26.943 [2024-10-17 10:21:29.902664] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:26.943 [2024-10-17 10:21:29.902675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:26.943 [2024-10-17 10:21:29.902685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:26.943 [2024-10-17 10:21:29.902695] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:26.943 [2024-10-17 10:21:29.902704] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:26.943 [2024-10-17 10:21:29.902712] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:26.943 [2024-10-17 10:21:29.902720] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:26.943 [2024-10-17 10:21:29.902729] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:26.943 [2024-10-17 10:21:29.902740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.902747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:26.943 [2024-10-17 10:21:29.902755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:22:26.943 [2024-10-17 10:21:29.902763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.902845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.943 [2024-10-17 10:21:29.902854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:26.943 [2024-10-17 10:21:29.902862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:26.943 [2024-10-17 10:21:29.902869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.943 [2024-10-17 10:21:29.902974] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:26.943 [2024-10-17 10:21:29.902987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:26.944 [2024-10-17 10:21:29.902997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:26.944 [2024-10-17 10:21:29.903020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:26.944 [2024-10-17 10:21:29.903042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:26.944 [2024-10-17 10:21:29.903076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:26.944 [2024-10-17 10:21:29.903086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:26.944 [2024-10-17 10:21:29.903093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:26.944 [2024-10-17 10:21:29.903101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:26.944 [2024-10-17 10:21:29.903109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:26.944 [2024-10-17 10:21:29.903122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:26.944 [2024-10-17 10:21:29.903137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903145] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:26.944 [2024-10-17 10:21:29.903161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:26.944 [2024-10-17 10:21:29.903182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:26.944 [2024-10-17 10:21:29.903203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:26.944 [2024-10-17 10:21:29.903225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:26.944 [2024-10-17 10:21:29.903247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:26.944 [2024-10-17 10:21:29.903261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:26.944 [2024-10-17 10:21:29.903268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:26.944 [2024-10-17 10:21:29.903274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:26.944 [2024-10-17 10:21:29.903281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:26.944 [2024-10-17 10:21:29.903289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:26.944 [2024-10-17 10:21:29.903296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:26.944 [2024-10-17 10:21:29.903310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:26.944 [2024-10-17 10:21:29.903317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903324] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:26.944 [2024-10-17 10:21:29.903333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:26.944 [2024-10-17 10:21:29.903341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:26.944 [2024-10-17 10:21:29.903357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:26.944 [2024-10-17 10:21:29.903364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:26.944 [2024-10-17 10:21:29.903372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:26.944 
[2024-10-17 10:21:29.903378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:26.944 [2024-10-17 10:21:29.903385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:26.944 [2024-10-17 10:21:29.903393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:26.944 [2024-10-17 10:21:29.903402] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:26.944 [2024-10-17 10:21:29.903412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:26.944 [2024-10-17 10:21:29.903428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:26.944 [2024-10-17 10:21:29.903435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:26.944 [2024-10-17 10:21:29.903442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:26.944 [2024-10-17 10:21:29.903450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:26.944 [2024-10-17 10:21:29.903457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:26.944 [2024-10-17 10:21:29.903464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:26.944 [2024-10-17 10:21:29.903471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:26.944 [2024-10-17 10:21:29.903479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:26.944 [2024-10-17 10:21:29.903486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:26.944 [2024-10-17 10:21:29.903522] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:26.944 [2024-10-17 10:21:29.903533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:26.944 [2024-10-17 10:21:29.903551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:26.944 [2024-10-17 10:21:29.903559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:26.944 [2024-10-17 10:21:29.903567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:26.944 [2024-10-17 10:21:29.903576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.944 [2024-10-17 10:21:29.903586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:26.944 [2024-10-17 10:21:29.903593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:22:26.944 [2024-10-17 10:21:29.903602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.944 [2024-10-17 10:21:29.936304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.944 [2024-10-17 10:21:29.936357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:26.944 [2024-10-17 10:21:29.936372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.656 ms 00:22:26.944 [2024-10-17 10:21:29.936380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.944 [2024-10-17 10:21:29.936477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.944 [2024-10-17 10:21:29.936486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:26.944 [2024-10-17 10:21:29.936496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:26.944 [2024-10-17 10:21:29.936504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.944 [2024-10-17 10:21:29.991120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.944 [2024-10-17 10:21:29.991177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:26.944 [2024-10-17 10:21:29.991191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.555 ms 00:22:26.944 [2024-10-17 10:21:29.991200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.944 [2024-10-17 10:21:29.991249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.945 [2024-10-17 10:21:29.991259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:26.945 [2024-10-17 10:21:29.991269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:26.945 [2024-10-17 10:21:29.991281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.945 [2024-10-17 10:21:29.991891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.945 [2024-10-17 10:21:29.991934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:26.945 [2024-10-17 10:21:29.991946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:22:26.945 [2024-10-17 10:21:29.991954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.945 [2024-10-17 10:21:29.992132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.945 [2024-10-17 10:21:29.992145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:26.945 [2024-10-17 10:21:29.992154] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:22:26.945 [2024-10-17 10:21:29.992169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.945 [2024-10-17 10:21:30.008115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.945 [2024-10-17 10:21:30.008162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:26.945 [2024-10-17 10:21:30.008174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.926 ms 00:22:26.945 [2024-10-17 10:21:30.008185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.945 [2024-10-17 10:21:30.023030] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:26.945 [2024-10-17 10:21:30.023098] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:26.945 [2024-10-17 10:21:30.023112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.945 [2024-10-17 10:21:30.023121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:26.945 [2024-10-17 10:21:30.023132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.814 ms 00:22:26.945 [2024-10-17 10:21:30.023140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.207 [2024-10-17 10:21:30.049565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.207 [2024-10-17 10:21:30.049640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:27.207 [2024-10-17 10:21:30.049653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.368 ms 00:22:27.207 [2024-10-17 10:21:30.049662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.207 [2024-10-17 10:21:30.062635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.207 [2024-10-17 10:21:30.062686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:27.207 [2024-10-17 10:21:30.062698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.917 ms 00:22:27.207 [2024-10-17 10:21:30.062706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.207 [2024-10-17 10:21:30.075007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.207 [2024-10-17 10:21:30.075068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:27.208 [2024-10-17 10:21:30.075081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.252 ms 00:22:27.208 [2024-10-17 10:21:30.075089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.075769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.075802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:27.208 [2024-10-17 10:21:30.075813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:22:27.208 [2024-10-17 10:21:30.075824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.143612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.143714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:27.208 [2024-10-17 10:21:30.143731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.766 ms 00:22:27.208 [2024-10-17 10:21:30.143747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.155723] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:27.208 [2024-10-17 10:21:30.159414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.159462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:27.208 [2024-10-17 10:21:30.159475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.601 ms 00:22:27.208 [2024-10-17 10:21:30.159484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.159591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.159602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:27.208 [2024-10-17 10:21:30.159612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:27.208 [2024-10-17 10:21:30.159621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.159697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.159708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:27.208 [2024-10-17 10:21:30.159717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:27.208 [2024-10-17 10:21:30.159725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.159747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.159758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:27.208 [2024-10-17 10:21:30.159767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:27.208 [2024-10-17 10:21:30.159775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.159809] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:27.208 [2024-10-17 10:21:30.159823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.159832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:27.208 [2024-10-17 10:21:30.159841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:27.208 [2024-10-17 10:21:30.159849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.186482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.186534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:27.208 [2024-10-17 10:21:30.186549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.614 ms 00:22:27.208 [2024-10-17 10:21:30.186564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.208 [2024-10-17 10:21:30.186655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.208 [2024-10-17 10:21:30.186665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:27.208 [2024-10-17 10:21:30.186675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:27.208 [2024-10-17 10:21:30.186683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:27.208 [2024-10-17 10:21:30.187929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.371 ms, result 0 00:22:28.151  [2024-10-17T10:21:32.626Z] Copying: 9320/1048576 [kB] (9320 kBps) [2024-10-17T10:21:33.571Z] Copying: 19228/1048576 [kB] (9908 kBps) [2024-10-17T10:21:34.516Z] Copying: 28872/1048576 [kB] (9644 kBps) [2024-10-17T10:21:35.459Z] Copying: 38460/1048576 [kB] (9588 kBps) [2024-10-17T10:21:36.421Z] Copying: 47712/1048576 [kB] (9252 kBps) [2024-10-17T10:21:37.369Z] Copying: 57544/1048576 [kB] (9832 kBps) [2024-10-17T10:21:38.312Z] Copying: 66720/1048576 [kB] (9176 kBps) [2024-10-17T10:21:39.254Z] Copying: 75648/1048576 [kB] (8928 kBps) [2024-10-17T10:21:40.628Z] Copying: 84640/1048576 [kB] (8992 kBps) [2024-10-17T10:21:41.560Z] Copying: 94648/1048576 [kB] (10008 kBps) [2024-10-17T10:21:42.498Z] Copying: 102/1024 [MB] (10 MBps) [2024-10-17T10:21:43.435Z] Copying: 113/1024 [MB] (10 MBps) [2024-10-17T10:21:44.370Z] Copying: 126288/1048576 [kB] (10156 kBps) [2024-10-17T10:21:45.321Z] Copying: 133/1024 [MB] (10 MBps) [2024-10-17T10:21:46.270Z] Copying: 147108/1048576 [kB] (10204 kBps) [2024-10-17T10:21:47.204Z] Copying: 154/1024 [MB] (11 MBps) [2024-10-17T10:21:48.575Z] Copying: 165/1024 [MB] (10 MBps) [2024-10-17T10:21:49.508Z] Copying: 175/1024 [MB] (10 MBps) [2024-10-17T10:21:50.442Z] Copying: 185/1024 [MB] (10 MBps) [2024-10-17T10:21:51.376Z] Copying: 196/1024 [MB] (10 MBps) [2024-10-17T10:21:52.311Z] Copying: 211152/1048576 [kB] (10088 kBps) [2024-10-17T10:21:53.243Z] Copying: 216/1024 [MB] (10 MBps) [2024-10-17T10:21:54.617Z] Copying: 226/1024 [MB] (10 MBps) [2024-10-17T10:21:55.551Z] Copying: 242204/1048576 [kB] (10212 kBps) [2024-10-17T10:21:56.485Z] Copying: 246/1024 [MB] (10 MBps) [2024-10-17T10:21:57.427Z] Copying: 257/1024 [MB] (10 MBps) [2024-10-17T10:21:58.363Z] Copying: 267/1024 [MB] (10 MBps) [2024-10-17T10:21:59.306Z] Copying: 277/1024 [MB] (10 MBps) [2024-10-17T10:22:00.282Z] Copying: 294084/1048576 [kB] (9892 kBps) [2024-10-17T10:22:01.213Z] Copying: 303800/1048576 [kB] (9716 kBps) [2024-10-17T10:22:02.585Z] Copying: 306/1024 [MB] (10 MBps) [2024-10-17T10:22:03.518Z] Copying: 317/1024 [MB] (10 MBps) [2024-10-17T10:22:04.456Z] Copying: 328/1024 [MB] (10 MBps) [2024-10-17T10:22:05.399Z] Copying: 339/1024 [MB] (10 MBps) [2024-10-17T10:22:06.345Z] Copying: 349/1024 [MB] (10 MBps) [2024-10-17T10:22:07.290Z] Copying: 368096/1048576 [kB] (10184 kBps) [2024-10-17T10:22:08.235Z] Copying: 378068/1048576 [kB] (9972 kBps) [2024-10-17T10:22:09.622Z] Copying: 387920/1048576 [kB] (9852 kBps) [2024-10-17T10:22:10.566Z] Copying: 388/1024 [MB] (10 MBps) [2024-10-17T10:22:11.509Z] Copying: 399/1024 [MB] (10 MBps) [2024-10-17T10:22:12.452Z] Copying: 418200/1048576 [kB] (9416 kBps) [2024-10-17T10:22:13.395Z] Copying: 427624/1048576 [kB] (9424 kBps) [2024-10-17T10:22:14.335Z] Copying: 437108/1048576 [kB] (9484 kBps) [2024-10-17T10:22:15.278Z] Copying: 446800/1048576 [kB] (9692 kBps) [2024-10-17T10:22:16.220Z] Copying: 457028/1048576 [kB] (10228 kBps) [2024-10-17T10:22:17.597Z] Copying: 466872/1048576 [kB] (9844 kBps) [2024-10-17T10:22:18.540Z] Copying: 466/1024 [MB] (10 MBps) [2024-10-17T10:22:19.478Z] Copying: 487504/1048576 [kB] (10200 kBps) [2024-10-17T10:22:20.412Z] Copying: 497216/1048576 [kB] (9712 kBps) [2024-10-17T10:22:21.347Z] Copying: 495/1024 [MB] (10 MBps) [2024-10-17T10:22:22.284Z] Copying: 505/1024 [MB] (10 MBps) [2024-10-17T10:22:23.224Z] Copying: 516/1024 [MB] (10 MBps) [2024-10-17T10:22:24.602Z] 
Copying: 538648/1048576 [kB] (10008 kBps) [2024-10-17T10:22:25.545Z] Copying: 536/1024 [MB] (10 MBps) [2024-10-17T10:22:26.483Z] Copying: 559368/1048576 [kB] (10104 kBps) [2024-10-17T10:22:27.431Z] Copying: 569176/1048576 [kB] (9808 kBps) [2024-10-17T10:22:28.382Z] Copying: 585/1024 [MB] (30 MBps) [2024-10-17T10:22:29.323Z] Copying: 630/1024 [MB] (44 MBps) [2024-10-17T10:22:30.258Z] Copying: 670/1024 [MB] (40 MBps) [2024-10-17T10:22:31.631Z] Copying: 682/1024 [MB] (12 MBps) [2024-10-17T10:22:32.564Z] Copying: 693/1024 [MB] (10 MBps) [2024-10-17T10:22:33.498Z] Copying: 707/1024 [MB] (14 MBps) [2024-10-17T10:22:34.430Z] Copying: 721/1024 [MB] (14 MBps) [2024-10-17T10:22:35.363Z] Copying: 738/1024 [MB] (16 MBps) [2024-10-17T10:22:36.296Z] Copying: 750/1024 [MB] (12 MBps) [2024-10-17T10:22:37.255Z] Copying: 772/1024 [MB] (21 MBps) [2024-10-17T10:22:38.626Z] Copying: 783/1024 [MB] (11 MBps) [2024-10-17T10:22:39.561Z] Copying: 809/1024 [MB] (25 MBps) [2024-10-17T10:22:40.501Z] Copying: 835/1024 [MB] (25 MBps) [2024-10-17T10:22:41.434Z] Copying: 849/1024 [MB] (14 MBps) [2024-10-17T10:22:42.367Z] Copying: 865/1024 [MB] (15 MBps) [2024-10-17T10:22:43.308Z] Copying: 891/1024 [MB] (25 MBps) [2024-10-17T10:22:44.242Z] Copying: 915/1024 [MB] (24 MBps) [2024-10-17T10:22:45.624Z] Copying: 941/1024 [MB] (25 MBps) [2024-10-17T10:22:46.602Z] Copying: 958/1024 [MB] (16 MBps) [2024-10-17T10:22:47.537Z] Copying: 975/1024 [MB] (17 MBps) [2024-10-17T10:22:48.474Z] Copying: 986/1024 [MB] (11 MBps) [2024-10-17T10:22:49.409Z] Copying: 999/1024 [MB] (12 MBps) [2024-10-17T10:22:50.344Z] Copying: 1015/1024 [MB] (15 MBps) [2024-10-17T10:22:50.603Z] Copying: 1048324/1048576 [kB] (8824 kBps) [2024-10-17T10:22:50.603Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-10-17 10:22:50.454378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.512 [2024-10-17 10:22:50.454452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:47.512 [2024-10-17 10:22:50.454469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:47.512 [2024-10-17 10:22:50.454486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.512 [2024-10-17 10:22:50.455343] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:47.512 [2024-10-17 10:22:50.458366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.512 [2024-10-17 10:22:50.458395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:47.512 [2024-10-17 10:22:50.458408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:23:47.512 [2024-10-17 10:22:50.458417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.512 [2024-10-17 10:22:50.469224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.512 [2024-10-17 10:22:50.469257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:47.512 [2024-10-17 10:22:50.469268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.160 ms 00:23:47.512 [2024-10-17 10:22:50.469276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.512 [2024-10-17 10:22:50.489845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.512 [2024-10-17 10:22:50.489878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:47.512 [2024-10-17 10:22:50.489889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 20.544 ms 00:23:47.512 [2024-10-17 10:22:50.489898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.512 [2024-10-17 10:22:50.495994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.513 [2024-10-17 10:22:50.496021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:47.513 [2024-10-17 10:22:50.496032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.071 ms 00:23:47.513 [2024-10-17 10:22:50.496040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.513 [2024-10-17 10:22:50.521226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.513 [2024-10-17 10:22:50.521260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:47.513 [2024-10-17 10:22:50.521273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.124 ms 00:23:47.513 [2024-10-17 10:22:50.521282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.513 [2024-10-17 10:22:50.536362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.513 [2024-10-17 10:22:50.536417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:47.513 [2024-10-17 10:22:50.536429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.046 ms 00:23:47.513 [2024-10-17 10:22:50.536437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.771 [2024-10-17 10:22:50.768400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.771 [2024-10-17 10:22:50.768466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:47.771 [2024-10-17 10:22:50.768482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 231.922 ms 00:23:47.771 [2024-10-17 10:22:50.768491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.772 [2024-10-17 10:22:50.794461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.772 [2024-10-17 10:22:50.794502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:47.772 [2024-10-17 10:22:50.794515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.955 ms 00:23:47.772 [2024-10-17 10:22:50.794524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.772 [2024-10-17 10:22:50.818213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.772 [2024-10-17 10:22:50.818264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:47.772 [2024-10-17 10:22:50.818276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.652 ms 00:23:47.772 [2024-10-17 10:22:50.818284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.772 [2024-10-17 10:22:50.841248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.772 [2024-10-17 10:22:50.841288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:47.772 [2024-10-17 10:22:50.841300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.927 ms 00:23:47.772 [2024-10-17 10:22:50.841309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-10-17 10:22:50.864877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.034 [2024-10-17 10:22:50.864917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:48.034 
[2024-10-17 10:22:50.864930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.503 ms 00:23:48.034 [2024-10-17 10:22:50.864938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.034 [2024-10-17 10:22:50.864976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:48.034 [2024-10-17 10:22:50.864994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99328 / 261120 wr_cnt: 1 state: open 00:23:48.034 [2024-10-17 10:22:50.865008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865207] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865417] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 
10:22:50.865633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:48.034 [2024-10-17 10:22:50.865668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:23:48.035 [2024-10-17 10:22:50.865846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:48.035 [2024-10-17 10:22:50.865882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:48.035 [2024-10-17 10:22:50.865891] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21bd6557-859f-4c45-bb05-166d03987101 00:23:48.035 [2024-10-17 10:22:50.865900] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99328 00:23:48.035 [2024-10-17 10:22:50.865909] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100288 00:23:48.035 [2024-10-17 10:22:50.865918] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99328 00:23:48.035 [2024-10-17 10:22:50.865928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:23:48.035 [2024-10-17 10:22:50.865936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:48.035 [2024-10-17 10:22:50.865945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:48.035 [2024-10-17 10:22:50.865966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:48.035 [2024-10-17 10:22:50.865974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:48.035 [2024-10-17 10:22:50.865981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:48.035 [2024-10-17 10:22:50.865992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.035 [2024-10-17 10:22:50.866001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:48.035 [2024-10-17 10:22:50.866011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:23:48.035 [2024-10-17 10:22:50.866019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:50.879862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.035 [2024-10-17 10:22:50.879900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:48.035 [2024-10-17 10:22:50.879913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.824 ms 00:23:48.035 [2024-10-17 10:22:50.879928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:50.880353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.035 [2024-10-17 10:22:50.880378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:48.035 [2024-10-17 10:22:50.880389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:23:48.035 [2024-10-17 10:22:50.880398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:50.918084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:50.918132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.035 [2024-10-17 10:22:50.918151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:50.918161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:50.918239] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:50.918249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.035 [2024-10-17 10:22:50.918259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:50.918268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:50.918339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:50.918352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.035 [2024-10-17 10:22:50.918362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:50.918375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:50.918393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:50.918402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.035 [2024-10-17 10:22:50.918411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:50.918419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.009703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.009782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.035 [2024-10-17 10:22:51.009808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.009818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.083742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.083814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.035 [2024-10-17 10:22:51.083830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.083840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.083928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.083939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.035 [2024-10-17 10:22:51.083950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.083959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.084042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.084080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.035 [2024-10-17 10:22:51.084093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.084103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.084228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.084243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.035 [2024-10-17 10:22:51.084253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.084262] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.084308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.084320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:48.035 [2024-10-17 10:22:51.084329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.084338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.084391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.084406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.035 [2024-10-17 10:22:51.084416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.084425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.084491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.035 [2024-10-17 10:22:51.084508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:48.035 [2024-10-17 10:22:51.084520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.035 [2024-10-17 10:22:51.084529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.035 [2024-10-17 10:22:51.084695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 632.602 ms, result 0 00:23:49.951 00:23:49.951 00:23:49.951 10:22:52 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:49.951 [2024-10-17 10:22:52.965624] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
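
A note on the figures in the shutdown dump above: the WAF line is simply the ratio of total writes to user writes (100288 / 99328). The spdk_dd restore invocation that follows takes --skip and --count in blocks, dd-style; the 1024 MiB total reported by the copy progress further down would then imply a 4 KiB logical block on ftl0. A minimal sanity-check sketch in Python — the input numbers are copied verbatim from this log, and the 4 KiB block size is inferred from them rather than read from the device:

    # Figures from the ftl_dev_dump_stats output above
    total_writes = 100288   # "total writes"
    user_writes = 99328     # "user writes"
    print(f"WAF = {total_writes / user_writes:.4f}")   # -> 1.0097, as logged

    # spdk_dd ran with --skip=131072 --count=262144 (blocks, dd-style).
    # The copy progress below reports 1024 MiB total, which implies:
    count_blocks = 262144
    total_bytes = 1024 * 1024 * 1024
    block_size = total_bytes // count_blocks
    print(block_size)                                  # -> 4096 bytes/block
    print(131072 * block_size / 2**20)                 # -> 512.0 MiB skip offset

So this restore pass reads the second 1 GiB-aligned half-unit of the test data, starting 512 MiB into the FTL device.
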
00:23:49.951 [2024-10-17 10:22:52.965783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77312 ] 00:23:50.212 [2024-10-17 10:22:53.113172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.212 [2024-10-17 10:22:53.263224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.786 [2024-10-17 10:22:53.592638] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:50.786 [2024-10-17 10:22:53.592735] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:50.786 [2024-10-17 10:22:53.757131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.786 [2024-10-17 10:22:53.757196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:50.786 [2024-10-17 10:22:53.757214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:50.786 [2024-10-17 10:22:53.757229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.786 [2024-10-17 10:22:53.757290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.786 [2024-10-17 10:22:53.757302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.786 [2024-10-17 10:22:53.757311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:50.786 [2024-10-17 10:22:53.757323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.786 [2024-10-17 10:22:53.757359] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:50.786 [2024-10-17 10:22:53.758084] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:50.786 [2024-10-17 10:22:53.758115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.786 [2024-10-17 10:22:53.758129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.786 [2024-10-17 10:22:53.758139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:23:50.786 [2024-10-17 10:22:53.758148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.786 [2024-10-17 10:22:53.760409] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:50.786 [2024-10-17 10:22:53.775466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.786 [2024-10-17 10:22:53.775514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:50.786 [2024-10-17 10:22:53.775529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.060 ms 00:23:50.786 [2024-10-17 10:22:53.775538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.786 [2024-10-17 10:22:53.775620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.786 [2024-10-17 10:22:53.775631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:50.786 [2024-10-17 10:22:53.775645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:50.786 [2024-10-17 10:22:53.775653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.786 [2024-10-17 10:22:53.787024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:50.786 [2024-10-17 10:22:53.787076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.786 [2024-10-17 10:22:53.787088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.290 ms 00:23:50.786 [2024-10-17 10:22:53.787098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.786 [2024-10-17 10:22:53.787190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.786 [2024-10-17 10:22:53.787201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.786 [2024-10-17 10:22:53.787210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:50.787 [2024-10-17 10:22:53.787220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.787 [2024-10-17 10:22:53.787278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.787 [2024-10-17 10:22:53.787292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:50.787 [2024-10-17 10:22:53.787303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:50.787 [2024-10-17 10:22:53.787312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.787 [2024-10-17 10:22:53.787336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:50.787 [2024-10-17 10:22:53.791970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.787 [2024-10-17 10:22:53.792011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.787 [2024-10-17 10:22:53.792023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms 00:23:50.787 [2024-10-17 10:22:53.792034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.787 [2024-10-17 10:22:53.792082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.787 [2024-10-17 10:22:53.792092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:50.787 [2024-10-17 10:22:53.792102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:50.787 [2024-10-17 10:22:53.792111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.787 [2024-10-17 10:22:53.792150] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:50.787 [2024-10-17 10:22:53.792178] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:50.787 [2024-10-17 10:22:53.792222] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:50.787 [2024-10-17 10:22:53.792250] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:50.787 [2024-10-17 10:22:53.792365] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:50.787 [2024-10-17 10:22:53.792379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:50.787 [2024-10-17 10:22:53.792391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:50.787 [2024-10-17 10:22:53.792403] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792413] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792423] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:50.787 [2024-10-17 10:22:53.792434] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:50.787 [2024-10-17 10:22:53.792443] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:50.787 [2024-10-17 10:22:53.792452] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:50.787 [2024-10-17 10:22:53.792464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.787 [2024-10-17 10:22:53.792473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:50.787 [2024-10-17 10:22:53.792483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:23:50.787 [2024-10-17 10:22:53.792491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.787 [2024-10-17 10:22:53.792576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.787 [2024-10-17 10:22:53.792587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:50.787 [2024-10-17 10:22:53.792595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:50.787 [2024-10-17 10:22:53.792603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.787 [2024-10-17 10:22:53.792712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:50.787 [2024-10-17 10:22:53.792737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:50.787 [2024-10-17 10:22:53.792746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:50.787 [2024-10-17 10:22:53.792774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:50.787 [2024-10-17 10:22:53.792799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:50.787 [2024-10-17 10:22:53.792820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:50.787 [2024-10-17 10:22:53.792828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:50.787 [2024-10-17 10:22:53.792836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:50.787 [2024-10-17 10:22:53.792845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:50.787 [2024-10-17 10:22:53.792854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:50.787 [2024-10-17 10:22:53.792869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:50.787 [2024-10-17 10:22:53.792883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792889] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:50.787 [2024-10-17 10:22:53.792905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:50.787 [2024-10-17 10:22:53.792925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:50.787 [2024-10-17 10:22:53.792944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:50.787 [2024-10-17 10:22:53.792966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:50.787 [2024-10-17 10:22:53.792981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:50.787 [2024-10-17 10:22:53.792988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:50.787 [2024-10-17 10:22:53.792995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:50.787 [2024-10-17 10:22:53.793001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:50.787 [2024-10-17 10:22:53.793008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:50.787 [2024-10-17 10:22:53.793016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:50.787 [2024-10-17 10:22:53.793023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:50.787 [2024-10-17 10:22:53.793031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:50.787 [2024-10-17 10:22:53.793039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.793068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:50.787 [2024-10-17 10:22:53.793076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:50.787 [2024-10-17 10:22:53.793083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.793089] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:50.787 [2024-10-17 10:22:53.793098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:50.787 [2024-10-17 10:22:53.793109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:50.787 [2024-10-17 10:22:53.793119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:50.787 [2024-10-17 10:22:53.793129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:50.787 [2024-10-17 10:22:53.793138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:50.787 [2024-10-17 10:22:53.793147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:50.787 
[2024-10-17 10:22:53.793156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:50.787 [2024-10-17 10:22:53.793164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:50.787 [2024-10-17 10:22:53.793171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:50.787 [2024-10-17 10:22:53.793181] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:50.787 [2024-10-17 10:22:53.793196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:50.787 [2024-10-17 10:22:53.793205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:50.787 [2024-10-17 10:22:53.793213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:50.787 [2024-10-17 10:22:53.793222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:50.787 [2024-10-17 10:22:53.793231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:50.788 [2024-10-17 10:22:53.793240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:50.788 [2024-10-17 10:22:53.793248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:50.788 [2024-10-17 10:22:53.793256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:50.788 [2024-10-17 10:22:53.793264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:50.788 [2024-10-17 10:22:53.793272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:50.788 [2024-10-17 10:22:53.793279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:50.788 [2024-10-17 10:22:53.793286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:50.788 [2024-10-17 10:22:53.793294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:50.788 [2024-10-17 10:22:53.793302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:50.788 [2024-10-17 10:22:53.793308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:50.788 [2024-10-17 10:22:53.793315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:50.788 [2024-10-17 10:22:53.793327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:50.788 [2024-10-17 10:22:53.793336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:50.788 [2024-10-17 10:22:53.793357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:50.788 [2024-10-17 10:22:53.793365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:50.788 [2024-10-17 10:22:53.793372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:50.788 [2024-10-17 10:22:53.793381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.788 [2024-10-17 10:22:53.793390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:50.788 [2024-10-17 10:22:53.793403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:23:50.788 [2024-10-17 10:22:53.793412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.788 [2024-10-17 10:22:53.831423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.788 [2024-10-17 10:22:53.831471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.788 [2024-10-17 10:22:53.831485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.960 ms 00:23:50.788 [2024-10-17 10:22:53.831494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.788 [2024-10-17 10:22:53.831595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.788 [2024-10-17 10:22:53.831605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:50.788 [2024-10-17 10:22:53.831614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:50.788 [2024-10-17 10:22:53.831624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.883997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.884063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.050 [2024-10-17 10:22:53.884078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.312 ms 00:23:51.050 [2024-10-17 10:22:53.884088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.884140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.884151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.050 [2024-10-17 10:22:53.884163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:51.050 [2024-10-17 10:22:53.884177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.884924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.884961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.050 [2024-10-17 10:22:53.884973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:23:51.050 [2024-10-17 10:22:53.884983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.885185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.885200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.050 [2024-10-17 10:22:53.885211] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:23:51.050 [2024-10-17 10:22:53.885228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.903371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.903416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.050 [2024-10-17 10:22:53.903429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.122 ms 00:23:51.050 [2024-10-17 10:22:53.903440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.918881] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:51.050 [2024-10-17 10:22:53.918930] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:51.050 [2024-10-17 10:22:53.918944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.918954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:51.050 [2024-10-17 10:22:53.918965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.388 ms 00:23:51.050 [2024-10-17 10:22:53.918973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.944933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.944988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:51.050 [2024-10-17 10:22:53.945001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.904 ms 00:23:51.050 [2024-10-17 10:22:53.945010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.958162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.958217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:51.050 [2024-10-17 10:22:53.958229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.084 ms 00:23:51.050 [2024-10-17 10:22:53.958237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.970912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.970957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:51.050 [2024-10-17 10:22:53.970970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.627 ms 00:23:51.050 [2024-10-17 10:22:53.970980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:53.971654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:53.971691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:51.050 [2024-10-17 10:22:53.971702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:23:51.050 [2024-10-17 10:22:53.971714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.042970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.043030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:51.050 [2024-10-17 10:22:54.043068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.234 ms 00:23:51.050 [2024-10-17 10:22:54.043086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.054734] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:51.050 [2024-10-17 10:22:54.058175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.058218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:51.050 [2024-10-17 10:22:54.058231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.032 ms 00:23:51.050 [2024-10-17 10:22:54.058241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.058332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.058347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:51.050 [2024-10-17 10:22:54.058360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:51.050 [2024-10-17 10:22:54.058370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.060491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.060540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:51.050 [2024-10-17 10:22:54.060553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:23:51.050 [2024-10-17 10:22:54.060562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.060603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.060614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:51.050 [2024-10-17 10:22:54.060625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:51.050 [2024-10-17 10:22:54.060635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.060681] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:51.050 [2024-10-17 10:22:54.060697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.060707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:51.050 [2024-10-17 10:22:54.060716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:51.050 [2024-10-17 10:22:54.060725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.050 [2024-10-17 10:22:54.087288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.050 [2024-10-17 10:22:54.087335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:51.051 [2024-10-17 10:22:54.087350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.543 ms 00:23:51.051 [2024-10-17 10:22:54.087365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.051 [2024-10-17 10:22:54.087461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.051 [2024-10-17 10:22:54.087474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:51.051 [2024-10-17 10:22:54.087485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:51.051 [2024-10-17 10:22:54.087494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
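
Each management step in the startup sequence above is traced as a quartet of NOTICE lines from mngt/ftl_mngt.c (Action/Rollback, name, duration, status). When triaging a slow startup it can help to pair names with durations; a minimal sketch, assuming the console text is available as one plain string (the regex simply mirrors the 428/430 trace_step line format visible in this run, and is not part of any SPDK tooling):

    import re

    # Pair each "name: ..." (428:trace_step) with the "duration: ... ms"
    # (430:trace_step) that follows it in the raw console text.
    QUARTET = re.compile(
        r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?) \d{2}:"
        r".*?430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms",
        re.DOTALL,
    )

    def slowest_steps(log_text, top=5):
        steps = [(n.strip(), float(ms)) for n, ms in QUARTET.findall(log_text)]
        return sorted(steps, key=lambda s: s[1], reverse=True)[:top]

On this startup it would surface Restore P2L checkpoints (71.234 ms), Initialize NV cache (52.312 ms), and Initialize metadata (37.960 ms) as the dominant contributors to the 331.301 ms 'FTL startup' total reported just below.
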
00:23:51.051 [2024-10-17 10:22:54.089003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.301 ms, result 0 00:23:52.436  [2024-10-17T10:22:56.463Z] Copying: 7880/1048576 [kB] (7880 kBps) [2024-10-17T10:22:57.457Z] Copying: 28/1024 [MB] (20 MBps) [2024-10-17T10:22:58.397Z] Copying: 68/1024 [MB] (40 MBps) [2024-10-17T10:22:59.339Z] Copying: 109/1024 [MB] (40 MBps) [2024-10-17T10:23:00.726Z] Copying: 150/1024 [MB] (41 MBps) [2024-10-17T10:23:01.298Z] Copying: 192/1024 [MB] (42 MBps) [2024-10-17T10:23:02.681Z] Copying: 233/1024 [MB] (41 MBps) [2024-10-17T10:23:03.625Z] Copying: 277/1024 [MB] (43 MBps) [2024-10-17T10:23:04.569Z] Copying: 320/1024 [MB] (42 MBps) [2024-10-17T10:23:05.511Z] Copying: 365/1024 [MB] (44 MBps) [2024-10-17T10:23:06.455Z] Copying: 415/1024 [MB] (50 MBps) [2024-10-17T10:23:07.397Z] Copying: 463/1024 [MB] (47 MBps) [2024-10-17T10:23:08.342Z] Copying: 512/1024 [MB] (48 MBps) [2024-10-17T10:23:09.297Z] Copying: 564/1024 [MB] (52 MBps) [2024-10-17T10:23:10.683Z] Copying: 615/1024 [MB] (51 MBps) [2024-10-17T10:23:11.627Z] Copying: 662/1024 [MB] (47 MBps) [2024-10-17T10:23:12.571Z] Copying: 709/1024 [MB] (46 MBps) [2024-10-17T10:23:13.516Z] Copying: 758/1024 [MB] (49 MBps) [2024-10-17T10:23:14.460Z] Copying: 808/1024 [MB] (49 MBps) [2024-10-17T10:23:15.433Z] Copying: 855/1024 [MB] (47 MBps) [2024-10-17T10:23:16.376Z] Copying: 903/1024 [MB] (48 MBps) [2024-10-17T10:23:17.319Z] Copying: 953/1024 [MB] (50 MBps) [2024-10-17T10:23:17.891Z] Copying: 1003/1024 [MB] (49 MBps) [2024-10-17T10:23:17.891Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-10-17 10:23:17.781569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.781648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:14.800 [2024-10-17 10:23:17.781669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:14.800 [2024-10-17 10:23:17.781683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.800 [2024-10-17 10:23:17.781723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:14.800 [2024-10-17 10:23:17.785868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.785910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:14.800 [2024-10-17 10:23:17.785926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.123 ms 00:24:14.800 [2024-10-17 10:23:17.785939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.800 [2024-10-17 10:23:17.786282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.786313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:14.800 [2024-10-17 10:23:17.786326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:24:14.800 [2024-10-17 10:23:17.786338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.800 [2024-10-17 10:23:17.795182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.795213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:14.800 [2024-10-17 10:23:17.795223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.818 ms 00:24:14.800 [2024-10-17 10:23:17.795231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:14.800 [2024-10-17 10:23:17.801322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.801351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:14.800 [2024-10-17 10:23:17.801362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.061 ms 00:24:14.800 [2024-10-17 10:23:17.801370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.800 [2024-10-17 10:23:17.825228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.825260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:14.800 [2024-10-17 10:23:17.825271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.815 ms 00:24:14.800 [2024-10-17 10:23:17.825279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.800 [2024-10-17 10:23:17.840008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.800 [2024-10-17 10:23:17.840044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:14.800 [2024-10-17 10:23:17.840068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.695 ms 00:24:14.800 [2024-10-17 10:23:17.840076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.063 [2024-10-17 10:23:17.896964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.063 [2024-10-17 10:23:17.897015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:15.063 [2024-10-17 10:23:17.897028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.863 ms 00:24:15.063 [2024-10-17 10:23:17.897037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.063 [2024-10-17 10:23:17.920401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.063 [2024-10-17 10:23:17.920433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:15.063 [2024-10-17 10:23:17.920443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.339 ms 00:24:15.063 [2024-10-17 10:23:17.920452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.063 [2024-10-17 10:23:17.943156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.063 [2024-10-17 10:23:17.943187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:15.063 [2024-10-17 10:23:17.943207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.672 ms 00:24:15.063 [2024-10-17 10:23:17.943216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.063 [2024-10-17 10:23:17.965946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.063 [2024-10-17 10:23:17.965976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:15.063 [2024-10-17 10:23:17.965986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.699 ms 00:24:15.063 [2024-10-17 10:23:17.965993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.063 [2024-10-17 10:23:17.988855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.063 [2024-10-17 10:23:17.988884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:15.063 [2024-10-17 10:23:17.988894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.808 ms 00:24:15.063 
[2024-10-17 10:23:17.988902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.063 [2024-10-17 10:23:17.988932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:15.063 [2024-10-17 10:23:17.988946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:15.063 [2024-10-17 10:23:17.988956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.988965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.988972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.988979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.988987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.988994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989147] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:15.063 [2024-10-17 10:23:17.989162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 
10:23:17.989338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:24:15.064 [2024-10-17 10:23:17.989527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:15.064 [2024-10-17 10:23:17.989740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:15.064 [2024-10-17 10:23:17.989749] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21bd6557-859f-4c45-bb05-166d03987101 00:24:15.064 [2024-10-17 10:23:17.989757] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:15.064 [2024-10-17 10:23:17.989764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32704 00:24:15.064 [2024-10-17 10:23:17.989771] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 31744 00:24:15.064 [2024-10-17 10:23:17.989780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0302 00:24:15.064 [2024-10-17 10:23:17.989786] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:15.064 [2024-10-17 10:23:17.989794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:15.064 [2024-10-17 10:23:17.989803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:15.064 [2024-10-17 10:23:17.989817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:15.064 [2024-10-17 10:23:17.989823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:15.064 [2024-10-17 10:23:17.989831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.064 [2024-10-17 10:23:17.989838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:15.064 [2024-10-17 10:23:17.989847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:24:15.064 [2024-10-17 10:23:17.989854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.064 [2024-10-17 10:23:18.002891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.064 [2024-10-17 10:23:18.002920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:15.064 [2024-10-17 10:23:18.002931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.008 ms 00:24:15.064 [2024-10-17 10:23:18.002940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.064 [2024-10-17 10:23:18.003331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.064 [2024-10-17 10:23:18.003347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:15.064 [2024-10-17 10:23:18.003356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:24:15.064 [2024-10-17 10:23:18.003364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.064 [2024-10-17 10:23:18.038045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.064 [2024-10-17 10:23:18.038083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:15.064 [2024-10-17 10:23:18.038097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.064 [2024-10-17 10:23:18.038105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.065 [2024-10-17 10:23:18.038154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.065 [2024-10-17 10:23:18.038162] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:15.065 [2024-10-17 10:23:18.038171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.065 [2024-10-17 10:23:18.038178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.065 [2024-10-17 10:23:18.038224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.065 [2024-10-17 10:23:18.038234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:15.065 [2024-10-17 10:23:18.038241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.065 [2024-10-17 10:23:18.038253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.065 [2024-10-17 10:23:18.038268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.065 [2024-10-17 10:23:18.038275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:15.065 [2024-10-17 10:23:18.038283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.065 [2024-10-17 10:23:18.038289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.065 [2024-10-17 10:23:18.120865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.065 [2024-10-17 10:23:18.120904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:15.065 [2024-10-17 10:23:18.120920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.065 [2024-10-17 10:23:18.120929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.325 [2024-10-17 10:23:18.187977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.325 [2024-10-17 10:23:18.188019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.325 [2024-10-17 10:23:18.188031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.325 [2024-10-17 10:23:18.188039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.325 [2024-10-17 10:23:18.188136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.325 [2024-10-17 10:23:18.188146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.325 [2024-10-17 10:23:18.188155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.326 [2024-10-17 10:23:18.188162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.326 [2024-10-17 10:23:18.188198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.326 [2024-10-17 10:23:18.188207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.326 [2024-10-17 10:23:18.188215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.326 [2024-10-17 10:23:18.188224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.326 [2024-10-17 10:23:18.188318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.326 [2024-10-17 10:23:18.188336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.326 [2024-10-17 10:23:18.188345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.326 [2024-10-17 10:23:18.188352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.326 [2024-10-17 10:23:18.188384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:15.326 [2024-10-17 10:23:18.188401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:15.326 [2024-10-17 10:23:18.188409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.326 [2024-10-17 10:23:18.188416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.326 [2024-10-17 10:23:18.188456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.326 [2024-10-17 10:23:18.188466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.326 [2024-10-17 10:23:18.188474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.326 [2024-10-17 10:23:18.188481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.326 [2024-10-17 10:23:18.188529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.326 [2024-10-17 10:23:18.188540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.326 [2024-10-17 10:23:18.188549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.326 [2024-10-17 10:23:18.188556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.326 [2024-10-17 10:23:18.188673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.086 ms, result 0 00:24:16.270 00:24:16.270 00:24:16.270 10:23:19 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:18.184 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:18.184 10:23:20 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:18.184 10:23:20 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:18.184 10:23:20 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:18.184 Process with pid 74618 is not found 00:24:18.184 Remove shared memory files 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74618 00:24:18.184 10:23:21 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74618 ']' 00:24:18.184 10:23:21 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74618 00:24:18.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74618) - No such process 00:24:18.184 10:23:21 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74618 is not found' 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:18.184 10:23:21 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:18.184 00:24:18.184 real 4m48.115s 00:24:18.184 user 4m36.459s 00:24:18.184 sys 0m12.129s 00:24:18.184 10:23:21 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.184 10:23:21 
ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:18.184 ************************************ 00:24:18.184 END TEST ftl_restore 00:24:18.184 ************************************ 00:24:18.184 10:23:21 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:18.184 10:23:21 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:18.184 10:23:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.184 10:23:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:18.184 ************************************ 00:24:18.184 START TEST ftl_dirty_shutdown 00:24:18.184 ************************************ 00:24:18.184 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:18.184 * Looking for test storage... 00:24:18.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:18.184 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.184 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.184 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.446 --rc genhtml_branch_coverage=1 00:24:18.446 --rc genhtml_function_coverage=1 00:24:18.446 --rc genhtml_legend=1 00:24:18.446 --rc geninfo_all_blocks=1 00:24:18.446 --rc geninfo_unexecuted_blocks=1 00:24:18.446 00:24:18.446 ' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.446 --rc genhtml_branch_coverage=1 00:24:18.446 --rc genhtml_function_coverage=1 00:24:18.446 --rc genhtml_legend=1 00:24:18.446 --rc geninfo_all_blocks=1 00:24:18.446 --rc geninfo_unexecuted_blocks=1 00:24:18.446 00:24:18.446 ' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.446 --rc genhtml_branch_coverage=1 00:24:18.446 --rc genhtml_function_coverage=1 00:24:18.446 --rc genhtml_legend=1 00:24:18.446 --rc geninfo_all_blocks=1 00:24:18.446 --rc geninfo_unexecuted_blocks=1 00:24:18.446 00:24:18.446 ' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.446 --rc genhtml_branch_coverage=1 00:24:18.446 --rc genhtml_function_coverage=1 00:24:18.446 --rc genhtml_legend=1 00:24:18.446 --rc geninfo_all_blocks=1 00:24:18.446 --rc geninfo_unexecuted_blocks=1 00:24:18.446 00:24:18.446 ' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:18.446 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:18.447 10:23:21 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77673 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77673 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77673 ']' 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.447 10:23:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:18.447 [2024-10-17 10:23:21.441543] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
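The waitforlisten step above blocks until spdk_tgt (pid 77673) is accepting JSON-RPC connections on /var/tmp/spdk.sock; every rpc.py call that follows in this test (bdev_nvme_attach_controller, bdev_get_bdevs, bdev_lvol_create_lvstore, bdev_ftl_create, and so on) is a JSON-RPC 2.0 request over that UNIX domain socket. A minimal sketch of that exchange, assuming a one-request-per-connection pattern and simplified response framing (rpc.py itself streams and decodes the reply incrementally):

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        # Build a JSON-RPC 2.0 request; method names are the same ones the
        # test passes to rpc.py (e.g. bdev_get_bdevs below). The socket path
        # is the default spdk_tgt listen address shown in the log.
        request = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            request["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            # Read until the accumulated bytes parse as one complete JSON
            # reply. This framing is a simplification of what rpc.py does.
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    reply = json.loads(buf)
                except ValueError:
                    continue  # partial JSON so far; keep reading
                if "error" in reply:
                    raise RuntimeError(reply["error"])
                return reply["result"]
        raise RuntimeError("connection closed before a full reply arrived")

    # Same query the test issues later via: rpc.py bdev_get_bdevs -b nvme0n1
    # print(spdk_rpc("bdev_get_bdevs", {"name": "nvme0n1"}))

On the target side, the -m 0x1 core mask passed to spdk_tgt selects core 0 only, which is why the startup notices below report a single available core and one reactor running on core 0.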
00:24:18.447 [2024-10-17 10:23:21.441678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77673 ] 00:24:18.709 [2024-10-17 10:23:21.591003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.709 [2024-10-17 10:23:21.712155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:19.652 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:19.913 10:23:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:20.175 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:20.175 { 00:24:20.175 "name": "nvme0n1", 00:24:20.175 "aliases": [ 00:24:20.175 "6d812992-ca03-4d49-93c7-5269144b4705" 00:24:20.175 ], 00:24:20.175 "product_name": "NVMe disk", 00:24:20.175 "block_size": 4096, 00:24:20.175 "num_blocks": 1310720, 00:24:20.175 "uuid": "6d812992-ca03-4d49-93c7-5269144b4705", 00:24:20.175 "numa_id": -1, 00:24:20.175 "assigned_rate_limits": { 00:24:20.175 "rw_ios_per_sec": 0, 00:24:20.175 "rw_mbytes_per_sec": 0, 00:24:20.175 "r_mbytes_per_sec": 0, 00:24:20.175 "w_mbytes_per_sec": 0 00:24:20.175 }, 00:24:20.175 "claimed": true, 00:24:20.175 "claim_type": "read_many_write_one", 00:24:20.175 "zoned": false, 00:24:20.175 "supported_io_types": { 00:24:20.175 "read": true, 00:24:20.175 "write": true, 00:24:20.175 "unmap": true, 00:24:20.175 "flush": true, 00:24:20.175 "reset": true, 00:24:20.175 "nvme_admin": true, 00:24:20.175 "nvme_io": true, 00:24:20.175 "nvme_io_md": false, 00:24:20.175 "write_zeroes": true, 00:24:20.175 "zcopy": false, 00:24:20.175 "get_zone_info": false, 00:24:20.175 "zone_management": false, 00:24:20.175 "zone_append": false, 00:24:20.175 "compare": true, 00:24:20.175 "compare_and_write": false, 00:24:20.175 "abort": true, 00:24:20.175 "seek_hole": false, 00:24:20.175 "seek_data": false, 00:24:20.175 
"copy": true, 00:24:20.175 "nvme_iov_md": false 00:24:20.175 }, 00:24:20.175 "driver_specific": { 00:24:20.175 "nvme": [ 00:24:20.175 { 00:24:20.175 "pci_address": "0000:00:11.0", 00:24:20.175 "trid": { 00:24:20.175 "trtype": "PCIe", 00:24:20.175 "traddr": "0000:00:11.0" 00:24:20.175 }, 00:24:20.175 "ctrlr_data": { 00:24:20.175 "cntlid": 0, 00:24:20.175 "vendor_id": "0x1b36", 00:24:20.175 "model_number": "QEMU NVMe Ctrl", 00:24:20.175 "serial_number": "12341", 00:24:20.175 "firmware_revision": "8.0.0", 00:24:20.175 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:20.175 "oacs": { 00:24:20.175 "security": 0, 00:24:20.175 "format": 1, 00:24:20.175 "firmware": 0, 00:24:20.175 "ns_manage": 1 00:24:20.175 }, 00:24:20.175 "multi_ctrlr": false, 00:24:20.175 "ana_reporting": false 00:24:20.175 }, 00:24:20.175 "vs": { 00:24:20.175 "nvme_version": "1.4" 00:24:20.175 }, 00:24:20.175 "ns_data": { 00:24:20.175 "id": 1, 00:24:20.175 "can_share": false 00:24:20.175 } 00:24:20.175 } 00:24:20.175 ], 00:24:20.175 "mp_policy": "active_passive" 00:24:20.175 } 00:24:20.176 } 00:24:20.176 ]' 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:20.176 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:20.437 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c3ef2a31-3604-49a5-990c-511d6190e987 00:24:20.437 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:20.437 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3ef2a31-3604-49a5-990c-511d6190e987 00:24:20.724 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:20.724 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0765e629-0a8f-44f0-9f67-5df86e49f823 00:24:20.724 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0765e629-0a8f-44f0-9f67-5df86e49f823 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:20.995 10:23:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:21.257 { 00:24:21.257 "name": "74ed82d7-271c-4b00-bca7-5dbb5b7db3bb", 00:24:21.257 "aliases": [ 00:24:21.257 "lvs/nvme0n1p0" 00:24:21.257 ], 00:24:21.257 "product_name": "Logical Volume", 00:24:21.257 "block_size": 4096, 00:24:21.257 "num_blocks": 26476544, 00:24:21.257 "uuid": "74ed82d7-271c-4b00-bca7-5dbb5b7db3bb", 00:24:21.257 "assigned_rate_limits": { 00:24:21.257 "rw_ios_per_sec": 0, 00:24:21.257 "rw_mbytes_per_sec": 0, 00:24:21.257 "r_mbytes_per_sec": 0, 00:24:21.257 "w_mbytes_per_sec": 0 00:24:21.257 }, 00:24:21.257 "claimed": false, 00:24:21.257 "zoned": false, 00:24:21.257 "supported_io_types": { 00:24:21.257 "read": true, 00:24:21.257 "write": true, 00:24:21.257 "unmap": true, 00:24:21.257 "flush": false, 00:24:21.257 "reset": true, 00:24:21.257 "nvme_admin": false, 00:24:21.257 "nvme_io": false, 00:24:21.257 "nvme_io_md": false, 00:24:21.257 "write_zeroes": true, 00:24:21.257 "zcopy": false, 00:24:21.257 "get_zone_info": false, 00:24:21.257 "zone_management": false, 00:24:21.257 "zone_append": false, 00:24:21.257 "compare": false, 00:24:21.257 "compare_and_write": false, 00:24:21.257 "abort": false, 00:24:21.257 "seek_hole": true, 00:24:21.257 "seek_data": true, 00:24:21.257 "copy": false, 00:24:21.257 "nvme_iov_md": false 00:24:21.257 }, 00:24:21.257 "driver_specific": { 00:24:21.257 "lvol": { 00:24:21.257 "lvol_store_uuid": "0765e629-0a8f-44f0-9f67-5df86e49f823", 00:24:21.257 "base_bdev": "nvme0n1", 00:24:21.257 "thin_provision": true, 00:24:21.257 "num_allocated_clusters": 0, 00:24:21.257 "snapshot": false, 00:24:21.257 "clone": false, 00:24:21.257 "esnap_clone": false 00:24:21.257 } 00:24:21.257 } 00:24:21.257 } 00:24:21.257 ]' 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:21.257 10:23:24 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:21.518 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:21.518 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:21.519 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:21.519 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:21.519 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:21.519 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:21.519 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:21.519 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:21.780 { 00:24:21.780 "name": "74ed82d7-271c-4b00-bca7-5dbb5b7db3bb", 00:24:21.780 "aliases": [ 00:24:21.780 "lvs/nvme0n1p0" 00:24:21.780 ], 00:24:21.780 "product_name": "Logical Volume", 00:24:21.780 "block_size": 4096, 00:24:21.780 "num_blocks": 26476544, 00:24:21.780 "uuid": "74ed82d7-271c-4b00-bca7-5dbb5b7db3bb", 00:24:21.780 "assigned_rate_limits": { 00:24:21.780 "rw_ios_per_sec": 0, 00:24:21.780 "rw_mbytes_per_sec": 0, 00:24:21.780 "r_mbytes_per_sec": 0, 00:24:21.780 "w_mbytes_per_sec": 0 00:24:21.780 }, 00:24:21.780 "claimed": false, 00:24:21.780 "zoned": false, 00:24:21.780 "supported_io_types": { 00:24:21.780 "read": true, 00:24:21.780 "write": true, 00:24:21.780 "unmap": true, 00:24:21.780 "flush": false, 00:24:21.780 "reset": true, 00:24:21.780 "nvme_admin": false, 00:24:21.780 "nvme_io": false, 00:24:21.780 "nvme_io_md": false, 00:24:21.780 "write_zeroes": true, 00:24:21.780 "zcopy": false, 00:24:21.780 "get_zone_info": false, 00:24:21.780 "zone_management": false, 00:24:21.780 "zone_append": false, 00:24:21.780 "compare": false, 00:24:21.780 "compare_and_write": false, 00:24:21.780 "abort": false, 00:24:21.780 "seek_hole": true, 00:24:21.780 "seek_data": true, 00:24:21.780 "copy": false, 00:24:21.780 "nvme_iov_md": false 00:24:21.780 }, 00:24:21.780 "driver_specific": { 00:24:21.780 "lvol": { 00:24:21.780 "lvol_store_uuid": "0765e629-0a8f-44f0-9f67-5df86e49f823", 00:24:21.780 "base_bdev": "nvme0n1", 00:24:21.780 "thin_provision": true, 00:24:21.780 "num_allocated_clusters": 0, 00:24:21.780 "snapshot": false, 00:24:21.780 "clone": false, 00:24:21.780 "esnap_clone": false 00:24:21.780 } 00:24:21.780 } 00:24:21.780 } 00:24:21.780 ]' 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:21.780 10:23:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:22.039 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 00:24:22.297 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:22.297 { 00:24:22.297 "name": "74ed82d7-271c-4b00-bca7-5dbb5b7db3bb", 00:24:22.297 "aliases": [ 00:24:22.297 "lvs/nvme0n1p0" 00:24:22.297 ], 00:24:22.297 "product_name": "Logical Volume", 00:24:22.297 "block_size": 4096, 00:24:22.297 "num_blocks": 26476544, 00:24:22.297 "uuid": "74ed82d7-271c-4b00-bca7-5dbb5b7db3bb", 00:24:22.297 "assigned_rate_limits": { 00:24:22.297 "rw_ios_per_sec": 0, 00:24:22.297 "rw_mbytes_per_sec": 0, 00:24:22.297 "r_mbytes_per_sec": 0, 00:24:22.297 "w_mbytes_per_sec": 0 00:24:22.297 }, 00:24:22.297 "claimed": false, 00:24:22.297 "zoned": false, 00:24:22.298 "supported_io_types": { 00:24:22.298 "read": true, 00:24:22.298 "write": true, 00:24:22.298 "unmap": true, 00:24:22.298 "flush": false, 00:24:22.298 "reset": true, 00:24:22.298 "nvme_admin": false, 00:24:22.298 "nvme_io": false, 00:24:22.298 "nvme_io_md": false, 00:24:22.298 "write_zeroes": true, 00:24:22.298 "zcopy": false, 00:24:22.298 "get_zone_info": false, 00:24:22.298 "zone_management": false, 00:24:22.298 "zone_append": false, 00:24:22.298 "compare": false, 00:24:22.298 "compare_and_write": false, 00:24:22.298 "abort": false, 00:24:22.298 "seek_hole": true, 00:24:22.298 "seek_data": true, 00:24:22.298 "copy": false, 00:24:22.298 "nvme_iov_md": false 00:24:22.298 }, 00:24:22.298 "driver_specific": { 00:24:22.298 "lvol": { 00:24:22.298 "lvol_store_uuid": "0765e629-0a8f-44f0-9f67-5df86e49f823", 00:24:22.298 "base_bdev": "nvme0n1", 00:24:22.298 "thin_provision": true, 00:24:22.298 "num_allocated_clusters": 0, 00:24:22.298 "snapshot": false, 00:24:22.298 "clone": false, 00:24:22.298 "esnap_clone": false 00:24:22.298 } 00:24:22.298 } 00:24:22.298 } 00:24:22.298 ]' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb 
--l2p_dram_limit 10' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:22.298 10:23:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 74ed82d7-271c-4b00-bca7-5dbb5b7db3bb --l2p_dram_limit 10 -c nvc0n1p0 00:24:22.556 [2024-10-17 10:23:25.500957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.556 [2024-10-17 10:23:25.501011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:22.556 [2024-10-17 10:23:25.501026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.556 [2024-10-17 10:23:25.501033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.556 [2024-10-17 10:23:25.501088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.556 [2024-10-17 10:23:25.501099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.556 [2024-10-17 10:23:25.501107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:22.556 [2024-10-17 10:23:25.501114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.556 [2024-10-17 10:23:25.501135] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:22.556 [2024-10-17 10:23:25.501673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:22.556 [2024-10-17 10:23:25.501696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.556 [2024-10-17 10:23:25.501703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.556 [2024-10-17 10:23:25.501712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:24:22.556 [2024-10-17 10:23:25.501718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.556 [2024-10-17 10:23:25.501771] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 420ae77d-e8d2-4328-83d9-23d3af78b30f 00:24:22.556 [2024-10-17 10:23:25.503034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.556 [2024-10-17 10:23:25.503075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:22.556 [2024-10-17 10:23:25.503084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:22.557 [2024-10-17 10:23:25.503094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.509822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.509852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.557 [2024-10-17 10:23:25.509862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:24:22.557 [2024-10-17 10:23:25.509869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.509941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.509951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.557 [2024-10-17 10:23:25.509959] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:22.557 [2024-10-17 10:23:25.509970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.510008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.510023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.557 [2024-10-17 10:23:25.510030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:22.557 [2024-10-17 10:23:25.510039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.510069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.557 [2024-10-17 10:23:25.513349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.513375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.557 [2024-10-17 10:23:25.513384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:24:22.557 [2024-10-17 10:23:25.513394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.513421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.513429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.557 [2024-10-17 10:23:25.513438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:22.557 [2024-10-17 10:23:25.513444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.513459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:22.557 [2024-10-17 10:23:25.513571] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:22.557 [2024-10-17 10:23:25.513586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.557 [2024-10-17 10:23:25.513595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:22.557 [2024-10-17 10:23:25.513604] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.557 [2024-10-17 10:23:25.513612] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.557 [2024-10-17 10:23:25.513620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.557 [2024-10-17 10:23:25.513627] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:22.557 [2024-10-17 10:23:25.513635] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:22.557 [2024-10-17 10:23:25.513641] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:22.557 [2024-10-17 10:23:25.513648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.513656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.557 [2024-10-17 10:23:25.513664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:24:22.557 [2024-10-17 10:23:25.513675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.513743] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.557 [2024-10-17 10:23:25.513756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.557 [2024-10-17 10:23:25.513763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:22.557 [2024-10-17 10:23:25.513769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.557 [2024-10-17 10:23:25.513846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.557 [2024-10-17 10:23:25.513860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.557 [2024-10-17 10:23:25.513870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.557 [2024-10-17 10:23:25.513876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.513885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.557 [2024-10-17 10:23:25.513891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.513898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.557 [2024-10-17 10:23:25.513905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.557 [2024-10-17 10:23:25.513912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.557 [2024-10-17 10:23:25.513917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.557 [2024-10-17 10:23:25.513924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.557 [2024-10-17 10:23:25.513930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.557 [2024-10-17 10:23:25.513936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.557 [2024-10-17 10:23:25.513942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.557 [2024-10-17 10:23:25.513949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:22.557 [2024-10-17 10:23:25.513954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.513966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.557 [2024-10-17 10:23:25.513971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:22.557 [2024-10-17 10:23:25.513978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.513984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.557 [2024-10-17 10:23:25.513992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.557 [2024-10-17 10:23:25.513997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.557 [2024-10-17 10:23:25.514004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.557 [2024-10-17 10:23:25.514009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.557 [2024-10-17 10:23:25.514023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.557 [2024-10-17 10:23:25.514030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.557 [2024-10-17 10:23:25.514042] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.557 [2024-10-17 10:23:25.514063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.557 [2024-10-17 10:23:25.514076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.557 [2024-10-17 10:23:25.514084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.557 [2024-10-17 10:23:25.514097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.557 [2024-10-17 10:23:25.514102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:22.557 [2024-10-17 10:23:25.514109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.557 [2024-10-17 10:23:25.514115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:22.557 [2024-10-17 10:23:25.514123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:22.557 [2024-10-17 10:23:25.514128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:22.557 [2024-10-17 10:23:25.514140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:22.557 [2024-10-17 10:23:25.514146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514151] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:22.557 [2024-10-17 10:23:25.514159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:22.557 [2024-10-17 10:23:25.514165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.557 [2024-10-17 10:23:25.514173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.557 [2024-10-17 10:23:25.514180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:22.557 [2024-10-17 10:23:25.514190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:22.557 [2024-10-17 10:23:25.514196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:22.557 [2024-10-17 10:23:25.514203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:22.557 [2024-10-17 10:23:25.514208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:22.557 [2024-10-17 10:23:25.514215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:22.557 [2024-10-17 10:23:25.514223] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:22.557 [2024-10-17 10:23:25.514233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.557 [2024-10-17 10:23:25.514240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:22.557 [2024-10-17 10:23:25.514247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:22.557 [2024-10-17 10:23:25.514253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:22.557 [2024-10-17 10:23:25.514260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:22.558 [2024-10-17 10:23:25.514266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:22.558 [2024-10-17 10:23:25.514273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:22.558 [2024-10-17 10:23:25.514279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:22.558 [2024-10-17 10:23:25.514286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:22.558 [2024-10-17 10:23:25.514292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:22.558 [2024-10-17 10:23:25.514301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:22.558 [2024-10-17 10:23:25.514307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:22.558 [2024-10-17 10:23:25.514314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:22.558 [2024-10-17 10:23:25.514319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:22.558 [2024-10-17 10:23:25.514327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:22.558 [2024-10-17 10:23:25.514332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:22.558 [2024-10-17 10:23:25.514340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.558 [2024-10-17 10:23:25.514349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.558 [2024-10-17 10:23:25.514357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:22.558 [2024-10-17 10:23:25.514363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:22.558 [2024-10-17 10:23:25.514371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:22.558 [2024-10-17 10:23:25.514377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.558 [2024-10-17 10:23:25.514385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:22.558 [2024-10-17 10:23:25.514390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:24:22.558 [2024-10-17 10:23:25.514397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.558 [2024-10-17 10:23:25.514439] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:22.558 [2024-10-17 10:23:25.514453] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:25.082 [2024-10-17 10:23:27.831079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.831138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:25.082 [2024-10-17 10:23:27.831154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2316.625 ms 00:24:25.082 [2024-10-17 10:23:27.831164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.082 [2024-10-17 10:23:27.859480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.859527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.082 [2024-10-17 10:23:27.859541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.095 ms 00:24:25.082 [2024-10-17 10:23:27.859552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.082 [2024-10-17 10:23:27.859682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.859696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:25.082 [2024-10-17 10:23:27.859704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:25.082 [2024-10-17 10:23:27.859717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.082 [2024-10-17 10:23:27.892488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.892525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.082 [2024-10-17 10:23:27.892536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.737 ms 00:24:25.082 [2024-10-17 10:23:27.892545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.082 [2024-10-17 10:23:27.892572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.892583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.082 [2024-10-17 10:23:27.892592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:25.082 [2024-10-17 10:23:27.892604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.082 [2024-10-17 10:23:27.893087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.893112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.082 [2024-10-17 10:23:27.893122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:24:25.082 [2024-10-17 10:23:27.893133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.082 [2024-10-17 10:23:27.893238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.082 [2024-10-17 10:23:27.893249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.082 [2024-10-17 10:23:27.893259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:25.083 [2024-10-17 10:23:27.893271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:27.908626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:27.908657] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.083 [2024-10-17 10:23:27.908668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.335 ms 00:24:25.083 [2024-10-17 10:23:27.908680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:27.920821] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:25.083 [2024-10-17 10:23:27.924110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:27.924138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:25.083 [2024-10-17 10:23:27.924152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.361 ms 00:24:25.083 [2024-10-17 10:23:27.924159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:28.000243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:28.000283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:25.083 [2024-10-17 10:23:28.000298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.055 ms 00:24:25.083 [2024-10-17 10:23:28.000307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:28.000493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:28.000506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:25.083 [2024-10-17 10:23:28.000518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:24:25.083 [2024-10-17 10:23:28.000529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:28.024294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:28.024325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:25.083 [2024-10-17 10:23:28.024339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.706 ms 00:24:25.083 [2024-10-17 10:23:28.024347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:28.046956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:28.046984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:25.083 [2024-10-17 10:23:28.046997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.572 ms 00:24:25.083 [2024-10-17 10:23:28.047005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:28.047573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:28.047593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:25.083 [2024-10-17 10:23:28.047604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:24:25.083 [2024-10-17 10:23:28.047612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.083 [2024-10-17 10:23:28.117968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.083 [2024-10-17 10:23:28.118000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:25.083 [2024-10-17 10:23:28.118016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.324 ms 00:24:25.083 [2024-10-17 10:23:28.118025] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:25.083 [2024-10-17 10:23:28.143025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:25.083 [2024-10-17 10:23:28.143062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:24:25.083 [2024-10-17 10:23:28.143077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.938 ms
00:24:25.083 [2024-10-17 10:23:28.143085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:25.083 [2024-10-17 10:23:28.166263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:25.083 [2024-10-17 10:23:28.166294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:24:25.083 [2024-10-17 10:23:28.166305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.153 ms
00:24:25.083 [2024-10-17 10:23:28.166313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:25.340 [2024-10-17 10:23:28.190553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:25.341 [2024-10-17 10:23:28.190585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:25.341 [2024-10-17 10:23:28.190599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.215 ms
00:24:25.341 [2024-10-17 10:23:28.190607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:25.341 [2024-10-17 10:23:28.190634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:25.341 [2024-10-17 10:23:28.190642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:25.341 [2024-10-17 10:23:28.190654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:25.341 [2024-10-17 10:23:28.190663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:25.341 [2024-10-17 10:23:28.190740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:25.341 [2024-10-17 10:23:28.190750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:25.341 [2024-10-17 10:23:28.190760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:24:25.341 [2024-10-17 10:23:28.190768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:25.341 [2024-10-17 10:23:28.191713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2690.298 ms, result 0
00:24:25.341 {
00:24:25.341 "name": "ftl0",
00:24:25.341 "uuid": "420ae77d-e8d2-4328-83d9-23d3af78b30f"
00:24:25.341 }
00:24:25.341 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:24:25.341 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:24:25.341 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:24:25.341 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:24:25.341 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:24:25.599 /dev/nbd0
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:24:25.599 1+0 records in
00:24:25.599 1+0 records out
00:24:25.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560551 s, 7.3 MB/s
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0
00:24:25.599 10:23:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:24:25.857 [2024-10-17 10:23:28.707642] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
00:24:25.857 [2024-10-17 10:23:28.707759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77804 ]
00:24:25.857 [2024-10-17 10:23:28.854257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:26.115 [2024-10-17 10:23:28.951379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:27.489  [2024-10-17T10:23:31.515Z] Copying: 196/1024 [MB] (196 MBps) [2024-10-17T10:23:32.468Z] Copying: 393/1024 [MB] (197 MBps) [2024-10-17T10:23:33.400Z] Copying: 596/1024 [MB] (202 MBps) [2024-10-17T10:23:33.965Z] Copying: 844/1024 [MB] (248 MBps) [2024-10-17T10:23:34.530Z] Copying: 1024/1024 [MB] (average 216 MBps)
00:24:31.439 
00:24:31.439 10:23:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:24:33.968 10:23:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:24:33.968 [2024-10-17 10:23:36.668720] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
00:24:33.968 [2024-10-17 10:23:36.668841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77891 ] 00:24:33.968 [2024-10-17 10:23:36.814593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.968 [2024-10-17 10:23:36.941174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.350  [2024-10-17T10:23:39.376Z] Copying: 21/1024 [MB] (21 MBps) [2024-10-17T10:23:40.309Z] Copying: 44/1024 [MB] (23 MBps) [2024-10-17T10:23:41.255Z] Copying: 76/1024 [MB] (32 MBps) [2024-10-17T10:23:42.189Z] Copying: 104/1024 [MB] (27 MBps) [2024-10-17T10:23:43.560Z] Copying: 132/1024 [MB] (27 MBps) [2024-10-17T10:23:44.494Z] Copying: 159/1024 [MB] (27 MBps) [2024-10-17T10:23:45.494Z] Copying: 190/1024 [MB] (30 MBps) [2024-10-17T10:23:46.428Z] Copying: 219/1024 [MB] (28 MBps) [2024-10-17T10:23:47.362Z] Copying: 248/1024 [MB] (29 MBps) [2024-10-17T10:23:48.296Z] Copying: 277/1024 [MB] (28 MBps) [2024-10-17T10:23:49.229Z] Copying: 308/1024 [MB] (30 MBps) [2024-10-17T10:23:50.612Z] Copying: 342/1024 [MB] (34 MBps) [2024-10-17T10:23:51.546Z] Copying: 365/1024 [MB] (23 MBps) [2024-10-17T10:23:52.483Z] Copying: 385/1024 [MB] (19 MBps) [2024-10-17T10:23:53.417Z] Copying: 403/1024 [MB] (17 MBps) [2024-10-17T10:23:54.351Z] Copying: 435/1024 [MB] (31 MBps) [2024-10-17T10:23:55.285Z] Copying: 464/1024 [MB] (28 MBps) [2024-10-17T10:23:56.218Z] Copying: 494/1024 [MB] (30 MBps) [2024-10-17T10:23:57.592Z] Copying: 522/1024 [MB] (28 MBps) [2024-10-17T10:23:58.192Z] Copying: 555/1024 [MB] (33 MBps) [2024-10-17T10:23:59.566Z] Copying: 571/1024 [MB] (15 MBps) [2024-10-17T10:24:00.499Z] Copying: 601/1024 [MB] (30 MBps) [2024-10-17T10:24:01.434Z] Copying: 618/1024 [MB] (16 MBps) [2024-10-17T10:24:02.365Z] Copying: 632/1024 [MB] (14 MBps) [2024-10-17T10:24:03.298Z] Copying: 653/1024 [MB] (20 MBps) [2024-10-17T10:24:04.231Z] Copying: 670/1024 [MB] (17 MBps) [2024-10-17T10:24:05.603Z] Copying: 690/1024 [MB] (19 MBps) [2024-10-17T10:24:06.538Z] Copying: 710/1024 [MB] (20 MBps) [2024-10-17T10:24:07.494Z] Copying: 729/1024 [MB] (18 MBps) [2024-10-17T10:24:08.427Z] Copying: 749/1024 [MB] (20 MBps) [2024-10-17T10:24:09.359Z] Copying: 768/1024 [MB] (19 MBps) [2024-10-17T10:24:10.293Z] Copying: 788/1024 [MB] (19 MBps) [2024-10-17T10:24:11.226Z] Copying: 805/1024 [MB] (17 MBps) [2024-10-17T10:24:12.599Z] Copying: 819/1024 [MB] (14 MBps) [2024-10-17T10:24:13.532Z] Copying: 838/1024 [MB] (18 MBps) [2024-10-17T10:24:14.466Z] Copying: 860/1024 [MB] (22 MBps) [2024-10-17T10:24:15.407Z] Copying: 872/1024 [MB] (12 MBps) [2024-10-17T10:24:16.341Z] Copying: 889/1024 [MB] (16 MBps) [2024-10-17T10:24:17.276Z] Copying: 906/1024 [MB] (17 MBps) [2024-10-17T10:24:18.209Z] Copying: 923/1024 [MB] (16 MBps) [2024-10-17T10:24:19.583Z] Copying: 936/1024 [MB] (13 MBps) [2024-10-17T10:24:20.516Z] Copying: 948/1024 [MB] (11 MBps) [2024-10-17T10:24:21.451Z] Copying: 959/1024 [MB] (10 MBps) [2024-10-17T10:24:22.385Z] Copying: 970/1024 [MB] (11 MBps) [2024-10-17T10:24:23.318Z] Copying: 982/1024 [MB] (11 MBps) [2024-10-17T10:24:24.280Z] Copying: 1000/1024 [MB] (17 MBps) [2024-10-17T10:24:25.213Z] Copying: 1011/1024 [MB] (11 MBps) [2024-10-17T10:24:25.471Z] Copying: 1023/1024 [MB] (11 MBps) [2024-10-17T10:24:26.039Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:25:22.948 00:25:22.948 10:24:25 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:22.948 10:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:23.210 10:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:23.210 [2024-10-17 10:24:26.280462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.210 [2024-10-17 10:24:26.280545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:23.210 [2024-10-17 10:24:26.280564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.210 [2024-10-17 10:24:26.280576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.210 [2024-10-17 10:24:26.280602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:23.210 [2024-10-17 10:24:26.283786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.210 [2024-10-17 10:24:26.283832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:23.210 [2024-10-17 10:24:26.283847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:25:23.210 [2024-10-17 10:24:26.283856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.210 [2024-10-17 10:24:26.287339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.210 [2024-10-17 10:24:26.287385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:23.210 [2024-10-17 10:24:26.287399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 00:25:23.210 [2024-10-17 10:24:26.287409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.471 [2024-10-17 10:24:26.308070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.471 [2024-10-17 10:24:26.308117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:23.471 [2024-10-17 10:24:26.308132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.635 ms 00:25:23.471 [2024-10-17 10:24:26.308145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.471 [2024-10-17 10:24:26.314379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.471 [2024-10-17 10:24:26.314437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:23.472 [2024-10-17 10:24:26.314452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:25:23.472 [2024-10-17 10:24:26.314461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.341403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.341452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:23.472 [2024-10-17 10:24:26.341468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.852 ms 00:25:23.472 [2024-10-17 10:24:26.341477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.360817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.360865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:23.472 [2024-10-17 10:24:26.360881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.280 ms 00:25:23.472 
[2024-10-17 10:24:26.360890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.361084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.361102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:23.472 [2024-10-17 10:24:26.361116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:25:23.472 [2024-10-17 10:24:26.361125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.387508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.387556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:23.472 [2024-10-17 10:24:26.387572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.356 ms 00:25:23.472 [2024-10-17 10:24:26.387580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.412791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.412838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:23.472 [2024-10-17 10:24:26.412853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.154 ms 00:25:23.472 [2024-10-17 10:24:26.412861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.437338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.437385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:23.472 [2024-10-17 10:24:26.437400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.416 ms 00:25:23.472 [2024-10-17 10:24:26.437408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.462033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.472 [2024-10-17 10:24:26.462083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:23.472 [2024-10-17 10:24:26.462098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.478 ms 00:25:23.472 [2024-10-17 10:24:26.462106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.472 [2024-10-17 10:24:26.462158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:23.472 [2024-10-17 10:24:26.462175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462260] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462497] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 
10:24:26.462740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:23.472 [2024-10-17 10:24:26.462831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.462987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:25:23.473 [2024-10-17 10:24:26.462996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:23.473 [2024-10-17 10:24:26.463195] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:23.473 [2024-10-17 10:24:26.463207] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 420ae77d-e8d2-4328-83d9-23d3af78b30f 00:25:23.473 [2024-10-17 10:24:26.463216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:23.473 [2024-10-17 10:24:26.463230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:23.473 [2024-10-17 10:24:26.463237] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:23.473 [2024-10-17 10:24:26.463248] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:23.473 [2024-10-17 10:24:26.463257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:23.473 [2024-10-17 10:24:26.463267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:23.473 [2024-10-17 10:24:26.463279] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:23.473 [2024-10-17 10:24:26.463290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:23.473 [2024-10-17 10:24:26.463297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:23.473 [2024-10-17 10:24:26.463325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.473 [2024-10-17 10:24:26.463333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:23.473 [2024-10-17 10:24:26.463344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:25:23.473 [2024-10-17 10:24:26.463354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.473 [2024-10-17 10:24:26.477893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.473 [2024-10-17 10:24:26.477934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:23.473 [2024-10-17 10:24:26.477949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.492 ms 00:25:23.473 [2024-10-17 10:24:26.477958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.473 [2024-10-17 10:24:26.478429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.473 [2024-10-17 10:24:26.478457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:23.473 [2024-10-17 10:24:26.478470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:25:23.473 [2024-10-17 10:24:26.478478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.473 [2024-10-17 10:24:26.528151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.473 [2024-10-17 10:24:26.528197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.473 [2024-10-17 10:24:26.528212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.473 [2024-10-17 10:24:26.528224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.473 [2024-10-17 10:24:26.528300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.473 [2024-10-17 10:24:26.528310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.473 [2024-10-17 10:24:26.528321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.473 [2024-10-17 10:24:26.528330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.473 [2024-10-17 10:24:26.528415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.473 [2024-10-17 10:24:26.528429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.473 [2024-10-17 10:24:26.528441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.473 [2024-10-17 10:24:26.528451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.473 [2024-10-17 10:24:26.528492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.473 [2024-10-17 10:24:26.528502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.473 [2024-10-17 10:24:26.528514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.473 [2024-10-17 10:24:26.528522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.605899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:23.734 [2024-10-17 10:24:26.605948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.734 [2024-10-17 10:24:26.605963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.605973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.659962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.659999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.734 [2024-10-17 10:24:26.660011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.660116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.660125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.734 [2024-10-17 10:24:26.660134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.660182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.660193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.734 [2024-10-17 10:24:26.660202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.660292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.660300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.734 [2024-10-17 10:24:26.660309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.660345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.660353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.734 [2024-10-17 10:24:26.660363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.660408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.660415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.734 [2024-10-17 10:24:26.660423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 10:24:26.660483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.734 [2024-10-17 10:24:26.660494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.734 [2024-10-17 10:24:26.660502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.734 [2024-10-17 10:24:26.660508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.734 [2024-10-17 
10:24:26.660635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.149 ms, result 0
00:25:23.734 true
00:25:23.734 10:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77673
00:25:23.734 10:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77673
00:25:23.734 10:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:25:23.993 [2024-10-17 10:24:26.744354] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
00:25:23.993 [2024-10-17 10:24:26.744486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78417 ]
00:25:23.993 [2024-10-17 10:24:26.891681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:23.993 [2024-10-17 10:24:26.987735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.367  [2024-10-17T10:24:29.392Z] Copying: 252/1024 [MB] (252 MBps) [2024-10-17T10:24:30.327Z] Copying: 506/1024 [MB] (254 MBps) [2024-10-17T10:24:31.262Z] Copying: 757/1024 [MB] (251 MBps) [2024-10-17T10:24:31.262Z] Copying: 1008/1024 [MB] (250 MBps) [2024-10-17T10:24:32.198Z] Copying: 1024/1024 [MB] (average 251 MBps)
00:25:29.107 
00:25:29.107 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77673 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:25:29.107 10:24:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:29.107 [2024-10-17 10:24:31.939780] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
00:25:29.107 [2024-10-17 10:24:31.939906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78471 ] 00:25:29.107 [2024-10-17 10:24:32.087965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.107 [2024-10-17 10:24:32.183445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.365 [2024-10-17 10:24:32.411393] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:29.365 [2024-10-17 10:24:32.411447] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:29.625 [2024-10-17 10:24:32.475016] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:29.625 [2024-10-17 10:24:32.475576] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:29.625 [2024-10-17 10:24:32.476534] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:30.199 [2024-10-17 10:24:33.133002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.133070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.199 [2024-10-17 10:24:33.133086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:30.199 [2024-10-17 10:24:33.133095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.133155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.133166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.199 [2024-10-17 10:24:33.133175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:30.199 [2024-10-17 10:24:33.133183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.133205] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.199 [2024-10-17 10:24:33.133892] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.199 [2024-10-17 10:24:33.133920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.133929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.199 [2024-10-17 10:24:33.133938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:25:30.199 [2024-10-17 10:24:33.133946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.135872] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:30.199 [2024-10-17 10:24:33.150136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.150184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:30.199 [2024-10-17 10:24:33.150204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.265 ms 00:25:30.199 [2024-10-17 10:24:33.150213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.150290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.150301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:30.199 [2024-10-17 10:24:33.150310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:30.199 [2024-10-17 10:24:33.150318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.160550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.160590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.199 [2024-10-17 10:24:33.160602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.153 ms 00:25:30.199 [2024-10-17 10:24:33.160610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.160693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.160703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.199 [2024-10-17 10:24:33.160712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:30.199 [2024-10-17 10:24:33.160720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.160777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.160790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.199 [2024-10-17 10:24:33.160802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:30.199 [2024-10-17 10:24:33.160810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.160832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.199 [2024-10-17 10:24:33.165301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.165340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.199 [2024-10-17 10:24:33.165351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:25:30.199 [2024-10-17 10:24:33.165360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.165396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.165405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.199 [2024-10-17 10:24:33.165415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:30.199 [2024-10-17 10:24:33.165423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.165459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:30.199 [2024-10-17 10:24:33.165484] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:30.199 [2024-10-17 10:24:33.165528] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:30.199 [2024-10-17 10:24:33.165548] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:30.199 [2024-10-17 10:24:33.165660] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.199 [2024-10-17 10:24:33.165672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.199 
[2024-10-17 10:24:33.165683] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:30.199 [2024-10-17 10:24:33.165694] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.199 [2024-10-17 10:24:33.165704] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.199 [2024-10-17 10:24:33.165717] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:30.199 [2024-10-17 10:24:33.165725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.199 [2024-10-17 10:24:33.165733] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.199 [2024-10-17 10:24:33.165741] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.199 [2024-10-17 10:24:33.165749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.165758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.199 [2024-10-17 10:24:33.165769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:30.199 [2024-10-17 10:24:33.165777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.165861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.199 [2024-10-17 10:24:33.165871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.199 [2024-10-17 10:24:33.165879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:30.199 [2024-10-17 10:24:33.165889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.199 [2024-10-17 10:24:33.165996] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.199 [2024-10-17 10:24:33.166008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.199 [2024-10-17 10:24:33.166017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.199 [2024-10-17 10:24:33.166043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.199 [2024-10-17 10:24:33.166085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.199 [2024-10-17 10:24:33.166101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.199 [2024-10-17 10:24:33.166115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:30.199 [2024-10-17 10:24:33.166123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.199 [2024-10-17 10:24:33.166130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.199 [2024-10-17 10:24:33.166137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:30.199 [2024-10-17 10:24:33.166145] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:30.199 [2024-10-17 10:24:33.166160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.199 [2024-10-17 10:24:33.166180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.199 [2024-10-17 10:24:33.166200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.199 [2024-10-17 10:24:33.166221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.199 [2024-10-17 10:24:33.166242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.199 [2024-10-17 10:24:33.166257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.199 [2024-10-17 10:24:33.166264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:30.199 [2024-10-17 10:24:33.166272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.199 [2024-10-17 10:24:33.166279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.199 [2024-10-17 10:24:33.166285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:30.199 [2024-10-17 10:24:33.166292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.200 [2024-10-17 10:24:33.166299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.200 [2024-10-17 10:24:33.166305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:30.200 [2024-10-17 10:24:33.166312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.200 [2024-10-17 10:24:33.166319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.200 [2024-10-17 10:24:33.166325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:30.200 [2024-10-17 10:24:33.166335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.200 [2024-10-17 10:24:33.166342] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.200 [2024-10-17 10:24:33.166350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.200 [2024-10-17 10:24:33.166357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.200 [2024-10-17 10:24:33.166365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.200 [2024-10-17 
10:24:33.166376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.200 [2024-10-17 10:24:33.166384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.200 [2024-10-17 10:24:33.166391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.200 [2024-10-17 10:24:33.166399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.200 [2024-10-17 10:24:33.166405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.200 [2024-10-17 10:24:33.166412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.200 [2024-10-17 10:24:33.166420] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.200 [2024-10-17 10:24:33.166430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:30.200 [2024-10-17 10:24:33.166448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:30.200 [2024-10-17 10:24:33.166479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:30.200 [2024-10-17 10:24:33.166487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:30.200 [2024-10-17 10:24:33.166496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:30.200 [2024-10-17 10:24:33.166504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:30.200 [2024-10-17 10:24:33.166511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:30.200 [2024-10-17 10:24:33.166518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:30.200 [2024-10-17 10:24:33.166525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:30.200 [2024-10-17 10:24:33.166532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:30.200 [2024-10-17 10:24:33.166570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:30.200 [2024-10-17 10:24:33.166578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.200 [2024-10-17 10:24:33.166596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.200 [2024-10-17 10:24:33.166603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.200 [2024-10-17 10:24:33.166611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.200 [2024-10-17 10:24:33.166619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.166629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.200 [2024-10-17 10:24:33.166638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:25:30.200 [2024-10-17 10:24:33.166646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.203015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.203082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.200 [2024-10-17 10:24:33.203095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.322 ms 00:25:30.200 [2024-10-17 10:24:33.203104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.203205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.203216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:30.200 [2024-10-17 10:24:33.203229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:30.200 [2024-10-17 10:24:33.203237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.258366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.258421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.200 [2024-10-17 10:24:33.258435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.066 ms 00:25:30.200 [2024-10-17 10:24:33.258444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.258500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.258512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.200 [2024-10-17 10:24:33.258521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.200 [2024-10-17 10:24:33.258530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.259271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.259305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.200 [2024-10-17 10:24:33.259317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:25:30.200 [2024-10-17 10:24:33.259327] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.259497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.259513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.200 [2024-10-17 10:24:33.259524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:30.200 [2024-10-17 10:24:33.259534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.200 [2024-10-17 10:24:33.277317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.200 [2024-10-17 10:24:33.277363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.200 [2024-10-17 10:24:33.277375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:25:30.200 [2024-10-17 10:24:33.277384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.461 [2024-10-17 10:24:33.292842] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:30.461 [2024-10-17 10:24:33.292891] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:30.461 [2024-10-17 10:24:33.292904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.461 [2024-10-17 10:24:33.292914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:30.461 [2024-10-17 10:24:33.292925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.399 ms 00:25:30.462 [2024-10-17 10:24:33.292934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.319223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.319274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:30.462 [2024-10-17 10:24:33.319300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.232 ms 00:25:30.462 [2024-10-17 10:24:33.319310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.332650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.332702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:30.462 [2024-10-17 10:24:33.332714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.282 ms 00:25:30.462 [2024-10-17 10:24:33.332723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.345759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.345808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:30.462 [2024-10-17 10:24:33.345820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.985 ms 00:25:30.462 [2024-10-17 10:24:33.345828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.346508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.346537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:30.462 [2024-10-17 10:24:33.346549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:25:30.462 [2024-10-17 10:24:33.346559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:30.462 [2024-10-17 10:24:33.419497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.419556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:30.462 [2024-10-17 10:24:33.419572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.916 ms 00:25:30.462 [2024-10-17 10:24:33.419582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.432034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:30.462 [2024-10-17 10:24:33.435714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.435759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:30.462 [2024-10-17 10:24:33.435773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.076 ms 00:25:30.462 [2024-10-17 10:24:33.435783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.435870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.435886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:30.462 [2024-10-17 10:24:33.435896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:30.462 [2024-10-17 10:24:33.435906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.435988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.436001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:30.462 [2024-10-17 10:24:33.436011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:30.462 [2024-10-17 10:24:33.436020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.436044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.436073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:30.462 [2024-10-17 10:24:33.436087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:30.462 [2024-10-17 10:24:33.436096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.436138] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:30.462 [2024-10-17 10:24:33.436149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.436161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:30.462 [2024-10-17 10:24:33.436172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:30.462 [2024-10-17 10:24:33.436182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.462435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 10:24:33.462496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:30.462 [2024-10-17 10:24:33.462510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.229 ms 00:25:30.462 [2024-10-17 10:24:33.462519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.462614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.462 [2024-10-17 
10:24:33.462626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:30.462 [2024-10-17 10:24:33.462636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:30.462 [2024-10-17 10:24:33.462646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.462 [2024-10-17 10:24:33.464315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.682 ms, result 0 00:25:31.406  [2024-10-17T10:24:35.882Z] Copying: 10196/1048576 [kB] (10196 kBps) [2024-10-17T10:24:36.817Z] Copying: 20036/1048576 [kB] (9840 kBps) [2024-10-17T10:24:37.756Z] Copying: 37/1024 [MB] (17 MBps) [2024-10-17T10:24:38.690Z] Copying: 49/1024 [MB] (12 MBps) [2024-10-17T10:24:39.624Z] Copying: 61/1024 [MB] (11 MBps) [2024-10-17T10:24:40.568Z] Copying: 74/1024 [MB] (13 MBps) [2024-10-17T10:24:41.515Z] Copying: 87/1024 [MB] (12 MBps) [2024-10-17T10:24:42.898Z] Copying: 98/1024 [MB] (11 MBps) [2024-10-17T10:24:43.833Z] Copying: 108/1024 [MB] (10 MBps) [2024-10-17T10:24:44.768Z] Copying: 123/1024 [MB] (14 MBps) [2024-10-17T10:24:45.708Z] Copying: 135/1024 [MB] (11 MBps) [2024-10-17T10:24:46.644Z] Copying: 152/1024 [MB] (17 MBps) [2024-10-17T10:24:47.577Z] Copying: 167/1024 [MB] (14 MBps) [2024-10-17T10:24:48.513Z] Copying: 179/1024 [MB] (12 MBps) [2024-10-17T10:24:49.491Z] Copying: 191/1024 [MB] (11 MBps) [2024-10-17T10:24:50.876Z] Copying: 204/1024 [MB] (13 MBps) [2024-10-17T10:24:51.812Z] Copying: 217/1024 [MB] (13 MBps) [2024-10-17T10:24:52.750Z] Copying: 228/1024 [MB] (10 MBps) [2024-10-17T10:24:53.685Z] Copying: 239/1024 [MB] (10 MBps) [2024-10-17T10:24:54.620Z] Copying: 253/1024 [MB] (14 MBps) [2024-10-17T10:24:55.555Z] Copying: 270/1024 [MB] (17 MBps) [2024-10-17T10:24:56.490Z] Copying: 282/1024 [MB] (12 MBps) [2024-10-17T10:24:57.876Z] Copying: 295/1024 [MB] (12 MBps) [2024-10-17T10:24:58.820Z] Copying: 309/1024 [MB] (14 MBps) [2024-10-17T10:24:59.756Z] Copying: 321/1024 [MB] (12 MBps) [2024-10-17T10:25:00.697Z] Copying: 337/1024 [MB] (15 MBps) [2024-10-17T10:25:01.633Z] Copying: 351/1024 [MB] (14 MBps) [2024-10-17T10:25:02.570Z] Copying: 368/1024 [MB] (17 MBps) [2024-10-17T10:25:03.503Z] Copying: 384/1024 [MB] (15 MBps) [2024-10-17T10:25:04.878Z] Copying: 401/1024 [MB] (17 MBps) [2024-10-17T10:25:05.812Z] Copying: 419/1024 [MB] (17 MBps) [2024-10-17T10:25:06.768Z] Copying: 432/1024 [MB] (12 MBps) [2024-10-17T10:25:07.708Z] Copying: 457/1024 [MB] (25 MBps) [2024-10-17T10:25:08.649Z] Copying: 475/1024 [MB] (18 MBps) [2024-10-17T10:25:09.590Z] Copying: 490/1024 [MB] (14 MBps) [2024-10-17T10:25:10.533Z] Copying: 508/1024 [MB] (17 MBps) [2024-10-17T10:25:11.478Z] Copying: 522/1024 [MB] (13 MBps) [2024-10-17T10:25:12.867Z] Copying: 533/1024 [MB] (10 MBps) [2024-10-17T10:25:13.806Z] Copying: 543/1024 [MB] (10 MBps) [2024-10-17T10:25:14.740Z] Copying: 556/1024 [MB] (12 MBps) [2024-10-17T10:25:15.743Z] Copying: 582/1024 [MB] (25 MBps) [2024-10-17T10:25:16.684Z] Copying: 598/1024 [MB] (15 MBps) [2024-10-17T10:25:17.623Z] Copying: 618/1024 [MB] (20 MBps) [2024-10-17T10:25:18.560Z] Copying: 629/1024 [MB] (10 MBps) [2024-10-17T10:25:19.498Z] Copying: 644/1024 [MB] (15 MBps) [2024-10-17T10:25:20.881Z] Copying: 659/1024 [MB] (15 MBps) [2024-10-17T10:25:21.823Z] Copying: 675/1024 [MB] (15 MBps) [2024-10-17T10:25:22.765Z] Copying: 685/1024 [MB] (10 MBps) [2024-10-17T10:25:23.707Z] Copying: 698/1024 [MB] (12 MBps) [2024-10-17T10:25:24.655Z] Copying: 713/1024 [MB] (15 MBps) [2024-10-17T10:25:25.598Z] 
Copying: 728/1024 [MB] (14 MBps) [2024-10-17T10:25:26.540Z] Copying: 743/1024 [MB] (14 MBps) [2024-10-17T10:25:27.483Z] Copying: 761/1024 [MB] (18 MBps) [2024-10-17T10:25:28.859Z] Copying: 775/1024 [MB] (13 MBps) [2024-10-17T10:25:29.794Z] Copying: 786/1024 [MB] (10 MBps) [2024-10-17T10:25:30.728Z] Copying: 798/1024 [MB] (12 MBps) [2024-10-17T10:25:31.661Z] Copying: 811/1024 [MB] (13 MBps) [2024-10-17T10:25:32.636Z] Copying: 830/1024 [MB] (19 MBps) [2024-10-17T10:25:33.580Z] Copying: 845/1024 [MB] (14 MBps) [2024-10-17T10:25:34.522Z] Copying: 859/1024 [MB] (14 MBps) [2024-10-17T10:25:35.904Z] Copying: 875/1024 [MB] (15 MBps) [2024-10-17T10:25:36.846Z] Copying: 891/1024 [MB] (16 MBps) [2024-10-17T10:25:37.799Z] Copying: 905/1024 [MB] (13 MBps) [2024-10-17T10:25:38.744Z] Copying: 921/1024 [MB] (15 MBps) [2024-10-17T10:25:39.688Z] Copying: 936/1024 [MB] (15 MBps) [2024-10-17T10:25:40.630Z] Copying: 951/1024 [MB] (14 MBps) [2024-10-17T10:25:41.574Z] Copying: 965/1024 [MB] (14 MBps) [2024-10-17T10:25:42.517Z] Copying: 983/1024 [MB] (18 MBps) [2024-10-17T10:25:43.901Z] Copying: 995/1024 [MB] (11 MBps) [2024-10-17T10:25:44.842Z] Copying: 1010/1024 [MB] (15 MBps) [2024-10-17T10:25:45.102Z] Copying: 1023/1024 [MB] (12 MBps) [2024-10-17T10:25:45.102Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-17 10:25:45.010782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.011 [2024-10-17 10:25:45.010878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:42.011 [2024-10-17 10:25:45.010897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.011 [2024-10-17 10:25:45.010908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.011 [2024-10-17 10:25:45.013020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:42.011 [2024-10-17 10:25:45.019577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.011 [2024-10-17 10:25:45.019628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:42.011 [2024-10-17 10:25:45.019642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms 00:26:42.011 [2024-10-17 10:25:45.019651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.011 [2024-10-17 10:25:45.032102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.011 [2024-10-17 10:25:45.032163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:42.011 [2024-10-17 10:25:45.032176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.505 ms 00:26:42.011 [2024-10-17 10:25:45.032186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.011 [2024-10-17 10:25:45.056480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.011 [2024-10-17 10:25:45.056545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:42.011 [2024-10-17 10:25:45.056560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.274 ms 00:26:42.011 [2024-10-17 10:25:45.056569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.011 [2024-10-17 10:25:45.062727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.011 [2024-10-17 10:25:45.062775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:42.011 [2024-10-17 10:25:45.062806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 6.121 ms 00:26:42.011 [2024-10-17 10:25:45.062823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.011 [2024-10-17 10:25:45.090824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.011 [2024-10-17 10:25:45.090875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:42.011 [2024-10-17 10:25:45.090891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.947 ms 00:26:42.011 [2024-10-17 10:25:45.090900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.269 [2024-10-17 10:25:45.108403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.269 [2024-10-17 10:25:45.108456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:42.269 [2024-10-17 10:25:45.108470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.452 ms 00:26:42.269 [2024-10-17 10:25:45.108480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.527 [2024-10-17 10:25:45.402228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.527 [2024-10-17 10:25:45.402266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:42.527 [2024-10-17 10:25:45.402275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 293.695 ms 00:26:42.527 [2024-10-17 10:25:45.402281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.527 [2024-10-17 10:25:45.420943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.527 [2024-10-17 10:25:45.420968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:42.527 [2024-10-17 10:25:45.420977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.646 ms 00:26:42.527 [2024-10-17 10:25:45.420982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.527 [2024-10-17 10:25:45.438976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.527 [2024-10-17 10:25:45.439002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:42.527 [2024-10-17 10:25:45.439010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.968 ms 00:26:42.527 [2024-10-17 10:25:45.439016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.527 [2024-10-17 10:25:45.456511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.527 [2024-10-17 10:25:45.456543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:42.527 [2024-10-17 10:25:45.456551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.470 ms 00:26:42.527 [2024-10-17 10:25:45.456557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.527 [2024-10-17 10:25:45.474031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.527 [2024-10-17 10:25:45.474062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:42.527 [2024-10-17 10:25:45.474070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.432 ms 00:26:42.527 [2024-10-17 10:25:45.474075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.527 [2024-10-17 10:25:45.474100] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:42.527 [2024-10-17 10:25:45.474112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95488 / 261120 wr_cnt: 
1 state: open 00:26:42.527 [2024-10-17 10:25:45.474120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 
261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:42.527 [2024-10-17 10:25:45.474423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474566] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 10:25:45.474728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:42.528 [2024-10-17 
10:25:45.474741] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:42.528 [2024-10-17 10:25:45.474748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 420ae77d-e8d2-4328-83d9-23d3af78b30f 00:26:42.528 [2024-10-17 10:25:45.474755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95488 00:26:42.528 [2024-10-17 10:25:45.474761] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 96448 00:26:42.528 [2024-10-17 10:25:45.474775] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95488 00:26:42.528 [2024-10-17 10:25:45.474783] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0101 00:26:42.528 [2024-10-17 10:25:45.474789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:42.528 [2024-10-17 10:25:45.474796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:42.528 [2024-10-17 10:25:45.474802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:42.528 [2024-10-17 10:25:45.474807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:42.528 [2024-10-17 10:25:45.474813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:42.528 [2024-10-17 10:25:45.474819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.528 [2024-10-17 10:25:45.474826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:42.528 [2024-10-17 10:25:45.474832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:26:42.528 [2024-10-17 10:25:45.474838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.484649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.528 [2024-10-17 10:25:45.484675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:42.528 [2024-10-17 10:25:45.484683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.797 ms 00:26:42.528 [2024-10-17 10:25:45.484689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.484977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.528 [2024-10-17 10:25:45.484989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:42.528 [2024-10-17 10:25:45.484996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:26:42.528 [2024-10-17 10:25:45.485002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.512333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.528 [2024-10-17 10:25:45.512361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.528 [2024-10-17 10:25:45.512369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.528 [2024-10-17 10:25:45.512376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.512416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.528 [2024-10-17 10:25:45.512424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.528 [2024-10-17 10:25:45.512431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.528 [2024-10-17 10:25:45.512437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.512495] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.528 [2024-10-17 10:25:45.512504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.528 [2024-10-17 10:25:45.512510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.528 [2024-10-17 10:25:45.512516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.512528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.528 [2024-10-17 10:25:45.512534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.528 [2024-10-17 10:25:45.512541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.528 [2024-10-17 10:25:45.512546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.528 [2024-10-17 10:25:45.576101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.528 [2024-10-17 10:25:45.576137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.528 [2024-10-17 10:25:45.576147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.528 [2024-10-17 10:25:45.576153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.788 [2024-10-17 10:25:45.628221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.788 [2024-10-17 10:25:45.628300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.788 [2024-10-17 10:25:45.628369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.788 [2024-10-17 10:25:45.628472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:42.788 [2024-10-17 10:25:45.628515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628521] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.788 [2024-10-17 10:25:45.628573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.788 [2024-10-17 10:25:45.628627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.788 [2024-10-17 10:25:45.628635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.788 [2024-10-17 10:25:45.628641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.788 [2024-10-17 10:25:45.628748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 619.030 ms, result 0 00:26:44.165 00:26:44.165 00:26:44.165 10:25:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:46.078 10:25:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:46.339 [2024-10-17 10:25:49.212922] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:26:46.339 [2024-10-17 10:25:49.213096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79256 ] 00:26:46.339 [2024-10-17 10:25:49.369256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.616 [2024-10-17 10:25:49.516043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.890 [2024-10-17 10:25:49.839249] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:46.890 [2024-10-17 10:25:49.839339] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:47.152 [2024-10-17 10:25:50.008191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.008262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:47.152 [2024-10-17 10:25:50.008279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.152 [2024-10-17 10:25:50.008294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.008349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.008360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.152 [2024-10-17 10:25:50.008370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:47.152 [2024-10-17 10:25:50.008382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.008403] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:47.152 [2024-10-17 10:25:50.009283] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:47.152 [2024-10-17 10:25:50.009329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.009342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.152 [2024-10-17 10:25:50.009352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:26:47.152 [2024-10-17 10:25:50.009360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.011603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:47.152 [2024-10-17 10:25:50.027202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.027254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:47.152 [2024-10-17 10:25:50.027269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.601 ms 00:26:47.152 [2024-10-17 10:25:50.027278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.027356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.027367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:47.152 [2024-10-17 10:25:50.027380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:47.152 [2024-10-17 10:25:50.027388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.038710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.038753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.152 [2024-10-17 10:25:50.038765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.242 ms 00:26:47.152 [2024-10-17 10:25:50.038773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.038864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.038875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.152 [2024-10-17 10:25:50.038886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:47.152 [2024-10-17 10:25:50.038894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.038951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.038964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:47.152 [2024-10-17 10:25:50.038974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:47.152 [2024-10-17 10:25:50.038984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.039008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:47.152 [2024-10-17 10:25:50.043718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.043760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.152 [2024-10-17 10:25:50.043796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:26:47.152 [2024-10-17 10:25:50.043806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 
10:25:50.043850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.043860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:47.152 [2024-10-17 10:25:50.043869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:47.152 [2024-10-17 10:25:50.043879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.043919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:47.152 [2024-10-17 10:25:50.043945] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:47.152 [2024-10-17 10:25:50.043988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:47.152 [2024-10-17 10:25:50.044009] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:47.152 [2024-10-17 10:25:50.044140] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:47.152 [2024-10-17 10:25:50.044155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:47.152 [2024-10-17 10:25:50.044170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:47.152 [2024-10-17 10:25:50.044181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044191] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:47.152 [2024-10-17 10:25:50.044210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:47.152 [2024-10-17 10:25:50.044218] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:47.152 [2024-10-17 10:25:50.044228] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:47.152 [2024-10-17 10:25:50.044237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.044251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:47.152 [2024-10-17 10:25:50.044261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:26:47.152 [2024-10-17 10:25:50.044268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.044353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.152 [2024-10-17 10:25:50.044363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:47.152 [2024-10-17 10:25:50.044372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:47.152 [2024-10-17 10:25:50.044382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.152 [2024-10-17 10:25:50.044492] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:47.152 [2024-10-17 10:25:50.044515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:47.152 [2024-10-17 10:25:50.044530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044540] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:47.152 [2024-10-17 10:25:50.044557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:47.152 [2024-10-17 10:25:50.044582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.152 [2024-10-17 10:25:50.044601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:47.152 [2024-10-17 10:25:50.044609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:47.152 [2024-10-17 10:25:50.044620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.152 [2024-10-17 10:25:50.044628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:47.152 [2024-10-17 10:25:50.044635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:47.152 [2024-10-17 10:25:50.044651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:47.152 [2024-10-17 10:25:50.044667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:47.152 [2024-10-17 10:25:50.044688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:47.152 [2024-10-17 10:25:50.044708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:47.152 [2024-10-17 10:25:50.044733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:47.152 [2024-10-17 10:25:50.044752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.152 [2024-10-17 10:25:50.044765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:47.152 [2024-10-17 10:25:50.044772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:47.152 [2024-10-17 10:25:50.044778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.152 [2024-10-17 10:25:50.044785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:47.153 [2024-10-17 10:25:50.044794] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:47.153 [2024-10-17 10:25:50.044801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.153 [2024-10-17 10:25:50.044807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:47.153 [2024-10-17 10:25:50.044814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:47.153 [2024-10-17 10:25:50.044820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.153 [2024-10-17 10:25:50.044827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:47.153 [2024-10-17 10:25:50.044833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:47.153 [2024-10-17 10:25:50.044839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.153 [2024-10-17 10:25:50.044846] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:47.153 [2024-10-17 10:25:50.044857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:47.153 [2024-10-17 10:25:50.044866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.153 [2024-10-17 10:25:50.044876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.153 [2024-10-17 10:25:50.044884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:47.153 [2024-10-17 10:25:50.044891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:47.153 [2024-10-17 10:25:50.044898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:47.153 [2024-10-17 10:25:50.044905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:47.153 [2024-10-17 10:25:50.044913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:47.153 [2024-10-17 10:25:50.044920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:47.153 [2024-10-17 10:25:50.044928] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:47.153 [2024-10-17 10:25:50.044938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.044947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:47.153 [2024-10-17 10:25:50.044954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:47.153 [2024-10-17 10:25:50.044961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:47.153 [2024-10-17 10:25:50.044970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:47.153 [2024-10-17 10:25:50.044976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:47.153 [2024-10-17 10:25:50.044985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:47.153 [2024-10-17 10:25:50.044993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:47.153 [2024-10-17 
10:25:50.044999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:47.153 [2024-10-17 10:25:50.045006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:47.153 [2024-10-17 10:25:50.045013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.045020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.045027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.045033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.045042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:47.153 [2024-10-17 10:25:50.045076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:47.153 [2024-10-17 10:25:50.045086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.045098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:47.153 [2024-10-17 10:25:50.045105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:47.153 [2024-10-17 10:25:50.045115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:47.153 [2024-10-17 10:25:50.045125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:47.153 [2024-10-17 10:25:50.045133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.045144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:47.153 [2024-10-17 10:25:50.045153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:26:47.153 [2024-10-17 10:25:50.045161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.083211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.083259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.153 [2024-10-17 10:25:50.083273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.001 ms 00:26:47.153 [2024-10-17 10:25:50.083281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.083378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.083393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:47.153 [2024-10-17 10:25:50.083402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:47.153 [2024-10-17 10:25:50.083411] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.133000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.133065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.153 [2024-10-17 10:25:50.133080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.524 ms 00:26:47.153 [2024-10-17 10:25:50.133090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.133142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.133153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.153 [2024-10-17 10:25:50.133163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:47.153 [2024-10-17 10:25:50.133172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.133908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.133952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.153 [2024-10-17 10:25:50.133964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:26:47.153 [2024-10-17 10:25:50.133973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.134169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.134186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.153 [2024-10-17 10:25:50.134195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:26:47.153 [2024-10-17 10:25:50.134205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.152279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.152322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.153 [2024-10-17 10:25:50.152335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.047 ms 00:26:47.153 [2024-10-17 10:25:50.152348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.167771] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:47.153 [2024-10-17 10:25:50.167819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:47.153 [2024-10-17 10:25:50.167833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.167843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:47.153 [2024-10-17 10:25:50.167854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.367 ms 00:26:47.153 [2024-10-17 10:25:50.167862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.193807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.193859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:47.153 [2024-10-17 10:25:50.193879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.888 ms 00:26:47.153 [2024-10-17 10:25:50.193889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 
10:25:50.206735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.206790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:47.153 [2024-10-17 10:25:50.206803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.789 ms 00:26:47.153 [2024-10-17 10:25:50.206812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.219472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.219519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:47.153 [2024-10-17 10:25:50.219531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.615 ms 00:26:47.153 [2024-10-17 10:25:50.219541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.153 [2024-10-17 10:25:50.220247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.153 [2024-10-17 10:25:50.220278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:47.153 [2024-10-17 10:25:50.220291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:26:47.153 [2024-10-17 10:25:50.220300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.291759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.291825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:47.415 [2024-10-17 10:25:50.291841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.432 ms 00:26:47.415 [2024-10-17 10:25:50.291859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.303116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:47.415 [2024-10-17 10:25:50.306724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.306767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:47.415 [2024-10-17 10:25:50.306781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.808 ms 00:26:47.415 [2024-10-17 10:25:50.306791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.306879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.306892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:47.415 [2024-10-17 10:25:50.306902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:47.415 [2024-10-17 10:25:50.306911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.308982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.309029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:47.415 [2024-10-17 10:25:50.309040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:26:47.415 [2024-10-17 10:25:50.309065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.309098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.309108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:47.415 [2024-10-17 10:25:50.309119] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:47.415 [2024-10-17 10:25:50.309127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.309174] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:47.415 [2024-10-17 10:25:50.309187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.309200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:47.415 [2024-10-17 10:25:50.309212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:47.415 [2024-10-17 10:25:50.309221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.335516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.335566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:47.415 [2024-10-17 10:25:50.335581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.273 ms 00:26:47.415 [2024-10-17 10:25:50.335590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.335689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.415 [2024-10-17 10:25:50.335700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:47.415 [2024-10-17 10:25:50.335711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:47.415 [2024-10-17 10:25:50.335720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.415 [2024-10-17 10:25:50.337250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.454 ms, result 0 00:26:48.802  [2024-10-17T10:25:52.833Z] Copying: 1008/1048576 [kB] (1008 kBps) [2024-10-17T10:25:53.776Z] Copying: 4376/1048576 [kB] (3368 kBps) [2024-10-17T10:25:54.713Z] Copying: 12732/1048576 [kB] (8356 kBps) [2024-10-17T10:25:55.656Z] Copying: 26/1024 [MB] (14 MBps) [2024-10-17T10:25:56.600Z] Copying: 41/1024 [MB] (14 MBps) [2024-10-17T10:25:57.536Z] Copying: 56/1024 [MB] (14 MBps) [2024-10-17T10:25:58.950Z] Copying: 71/1024 [MB] (15 MBps) [2024-10-17T10:25:59.892Z] Copying: 86/1024 [MB] (15 MBps) [2024-10-17T10:26:00.833Z] Copying: 101/1024 [MB] (14 MBps) [2024-10-17T10:26:01.768Z] Copying: 115/1024 [MB] (14 MBps) [2024-10-17T10:26:02.701Z] Copying: 137/1024 [MB] (21 MBps) [2024-10-17T10:26:03.638Z] Copying: 152/1024 [MB] (15 MBps) [2024-10-17T10:26:04.582Z] Copying: 169/1024 [MB] (16 MBps) [2024-10-17T10:26:05.956Z] Copying: 184/1024 [MB] (15 MBps) [2024-10-17T10:26:06.905Z] Copying: 200/1024 [MB] (15 MBps) [2024-10-17T10:26:07.836Z] Copying: 222/1024 [MB] (21 MBps) [2024-10-17T10:26:08.771Z] Copying: 238/1024 [MB] (16 MBps) [2024-10-17T10:26:09.705Z] Copying: 254/1024 [MB] (15 MBps) [2024-10-17T10:26:10.639Z] Copying: 270/1024 [MB] (15 MBps) [2024-10-17T10:26:11.578Z] Copying: 292/1024 [MB] (22 MBps) [2024-10-17T10:26:12.965Z] Copying: 308/1024 [MB] (16 MBps) [2024-10-17T10:26:13.534Z] Copying: 324/1024 [MB] (15 MBps) [2024-10-17T10:26:14.919Z] Copying: 341/1024 [MB] (16 MBps) [2024-10-17T10:26:15.884Z] Copying: 357/1024 [MB] (16 MBps) [2024-10-17T10:26:16.822Z] Copying: 374/1024 [MB] (17 MBps) [2024-10-17T10:26:17.756Z] Copying: 390/1024 [MB] (15 MBps) [2024-10-17T10:26:18.691Z] Copying: 406/1024 [MB] (16 MBps) [2024-10-17T10:26:19.630Z] Copying: 423/1024 [MB] (16 
MBps) [2024-10-17T10:26:20.568Z] Copying: 437/1024 [MB] (14 MBps) [2024-10-17T10:26:21.940Z] Copying: 452/1024 [MB] (14 MBps) [2024-10-17T10:26:22.873Z] Copying: 468/1024 [MB] (16 MBps) [2024-10-17T10:26:23.811Z] Copying: 485/1024 [MB] (16 MBps) [2024-10-17T10:26:24.763Z] Copying: 501/1024 [MB] (16 MBps) [2024-10-17T10:26:25.701Z] Copying: 515/1024 [MB] (14 MBps) [2024-10-17T10:26:26.641Z] Copying: 533/1024 [MB] (17 MBps) [2024-10-17T10:26:27.579Z] Copying: 548/1024 [MB] (15 MBps) [2024-10-17T10:26:28.960Z] Copying: 565/1024 [MB] (16 MBps) [2024-10-17T10:26:29.532Z] Copying: 581/1024 [MB] (15 MBps) [2024-10-17T10:26:30.915Z] Copying: 596/1024 [MB] (14 MBps) [2024-10-17T10:26:31.850Z] Copying: 610/1024 [MB] (14 MBps) [2024-10-17T10:26:32.797Z] Copying: 625/1024 [MB] (15 MBps) [2024-10-17T10:26:33.767Z] Copying: 642/1024 [MB] (17 MBps) [2024-10-17T10:26:34.702Z] Copying: 658/1024 [MB] (16 MBps) [2024-10-17T10:26:35.641Z] Copying: 675/1024 [MB] (16 MBps) [2024-10-17T10:26:36.581Z] Copying: 692/1024 [MB] (16 MBps) [2024-10-17T10:26:37.952Z] Copying: 713/1024 [MB] (21 MBps) [2024-10-17T10:26:38.885Z] Copying: 730/1024 [MB] (16 MBps) [2024-10-17T10:26:39.828Z] Copying: 747/1024 [MB] (17 MBps) [2024-10-17T10:26:40.773Z] Copying: 773/1024 [MB] (26 MBps) [2024-10-17T10:26:41.715Z] Copying: 794/1024 [MB] (20 MBps) [2024-10-17T10:26:42.654Z] Copying: 808/1024 [MB] (14 MBps) [2024-10-17T10:26:43.586Z] Copying: 824/1024 [MB] (16 MBps) [2024-10-17T10:26:44.958Z] Copying: 845/1024 [MB] (20 MBps) [2024-10-17T10:26:45.893Z] Copying: 863/1024 [MB] (17 MBps) [2024-10-17T10:26:46.833Z] Copying: 882/1024 [MB] (19 MBps) [2024-10-17T10:26:47.766Z] Copying: 902/1024 [MB] (19 MBps) [2024-10-17T10:26:48.700Z] Copying: 918/1024 [MB] (15 MBps) [2024-10-17T10:26:49.640Z] Copying: 935/1024 [MB] (17 MBps) [2024-10-17T10:26:50.590Z] Copying: 955/1024 [MB] (20 MBps) [2024-10-17T10:26:51.526Z] Copying: 970/1024 [MB] (14 MBps) [2024-10-17T10:26:52.901Z] Copying: 986/1024 [MB] (15 MBps) [2024-10-17T10:26:53.835Z] Copying: 1002/1024 [MB] (16 MBps) [2024-10-17T10:26:53.835Z] Copying: 1019/1024 [MB] (16 MBps) [2024-10-17T10:26:53.835Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-17 10:26:53.818448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.744 [2024-10-17 10:26:53.818512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:50.744 [2024-10-17 10:26:53.818529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:50.744 [2024-10-17 10:26:53.818546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.744 [2024-10-17 10:26:53.818569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:50.744 [2024-10-17 10:26:53.821799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.744 [2024-10-17 10:26:53.821834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:50.744 [2024-10-17 10:26:53.821846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:27:50.744 [2024-10-17 10:26:53.821856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.744 [2024-10-17 10:26:53.822113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.744 [2024-10-17 10:26:53.822126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:50.744 [2024-10-17 10:26:53.822136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 
ms 00:27:50.744 [2024-10-17 10:26:53.822150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.835343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.835377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:51.005 [2024-10-17 10:26:53.835389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.175 ms 00:27:51.005 [2024-10-17 10:26:53.835397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.841131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.841154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:51.005 [2024-10-17 10:26:53.841162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.707 ms 00:27:51.005 [2024-10-17 10:26:53.841168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.861143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.861171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:51.005 [2024-10-17 10:26:53.861180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.930 ms 00:27:51.005 [2024-10-17 10:26:53.861186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.873380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.873406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:51.005 [2024-10-17 10:26:53.873415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:27:51.005 [2024-10-17 10:26:53.873422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.877071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.877097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:51.005 [2024-10-17 10:26:53.877105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:27:51.005 [2024-10-17 10:26:53.877111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.896014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.896039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:51.005 [2024-10-17 10:26:53.896052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.892 ms 00:27:51.005 [2024-10-17 10:26:53.896058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.914440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.914464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:51.005 [2024-10-17 10:26:53.914480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.357 ms 00:27:51.005 [2024-10-17 10:26:53.914485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.932256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.932280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:51.005 [2024-10-17 10:26:53.932288] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.746 ms 00:27:51.005 [2024-10-17 10:26:53.932293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.949964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.005 [2024-10-17 10:26:53.949990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:51.005 [2024-10-17 10:26:53.949997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.629 ms 00:27:51.005 [2024-10-17 10:26:53.950003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.005 [2024-10-17 10:26:53.950028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:51.005 [2024-10-17 10:26:53.950039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:51.005 [2024-10-17 10:26:53.950059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:51.005 [2024-10-17 10:26:53.950066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950170] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950316] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:51.005 [2024-10-17 10:26:53.950367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 
10:26:53.950467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:27:51.006 [2024-10-17 10:26:53.950612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:51.006 [2024-10-17 10:26:53.950654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:51.006 [2024-10-17 10:26:53.950661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 420ae77d-e8d2-4328-83d9-23d3af78b30f 00:27:51.006 [2024-10-17 10:26:53.950667] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:51.006 [2024-10-17 10:26:53.950673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 169152 00:27:51.006 [2024-10-17 10:26:53.950679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 167168 00:27:51.006 [2024-10-17 10:26:53.950685] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:27:51.006 [2024-10-17 10:26:53.950691] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:51.006 [2024-10-17 10:26:53.950700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:51.006 [2024-10-17 10:26:53.950706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:51.006 [2024-10-17 10:26:53.950717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:51.006 [2024-10-17 10:26:53.950722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:51.006 [2024-10-17 10:26:53.950728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.006 [2024-10-17 10:26:53.950734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:51.006 [2024-10-17 10:26:53.950740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:27:51.006 [2024-10-17 10:26:53.950746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:53.960921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.006 [2024-10-17 10:26:53.960946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:51.006 [2024-10-17 10:26:53.960957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.152 ms 00:27:51.006 [2024-10-17 10:26:53.960964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:53.961267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.006 [2024-10-17 10:26:53.961280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:51.006 [2024-10-17 10:26:53.961287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:51.006 [2024-10-17 10:26:53.961293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:53.989387] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.006 [2024-10-17 10:26:53.989415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:51.006 [2024-10-17 10:26:53.989423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.006 [2024-10-17 10:26:53.989429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:53.989472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.006 [2024-10-17 10:26:53.989480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:51.006 [2024-10-17 10:26:53.989486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.006 [2024-10-17 10:26:53.989492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:53.989534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.006 [2024-10-17 10:26:53.989542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:51.006 [2024-10-17 10:26:53.989551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.006 [2024-10-17 10:26:53.989558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:53.989569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.006 [2024-10-17 10:26:53.989576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:51.006 [2024-10-17 10:26:53.989583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.006 [2024-10-17 10:26:53.989588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.006 [2024-10-17 10:26:54.053759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.006 [2024-10-17 10:26:54.053794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:51.006 [2024-10-17 10:26:54.053804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.006 [2024-10-17 10:26:54.053811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.265 [2024-10-17 10:26:54.105544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.265 [2024-10-17 10:26:54.105581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.265 [2024-10-17 10:26:54.105591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.265 [2024-10-17 10:26:54.105598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.265 [2024-10-17 10:26:54.105670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.265 [2024-10-17 10:26:54.105679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.265 [2024-10-17 10:26:54.105686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.265 [2024-10-17 10:26:54.105695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.265 [2024-10-17 10:26:54.105724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.265 [2024-10-17 10:26:54.105732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.265 [2024-10-17 10:26:54.105738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.265 [2024-10-17 10:26:54.105745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:51.266 [2024-10-17 10:26:54.105821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.266 [2024-10-17 10:26:54.105830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.266 [2024-10-17 10:26:54.105836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.266 [2024-10-17 10:26:54.105843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.266 [2024-10-17 10:26:54.105870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.266 [2024-10-17 10:26:54.105878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:51.266 [2024-10-17 10:26:54.105885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.266 [2024-10-17 10:26:54.105892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.266 [2024-10-17 10:26:54.105928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.266 [2024-10-17 10:26:54.105935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.266 [2024-10-17 10:26:54.105942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.266 [2024-10-17 10:26:54.105948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.266 [2024-10-17 10:26:54.105989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:51.266 [2024-10-17 10:26:54.105998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:51.266 [2024-10-17 10:26:54.106005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:51.266 [2024-10-17 10:26:54.106011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.266 [2024-10-17 10:26:54.106129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 287.660 ms, result 0 00:27:51.833 00:27:51.833 00:27:51.833 10:26:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:54.374 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:54.374 10:26:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:54.374 [2024-10-17 10:26:57.012289] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:27:54.374 [2024-10-17 10:26:57.012438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79944 ] 00:27:54.374 [2024-10-17 10:26:57.169122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.374 [2024-10-17 10:26:57.321022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.633 [2024-10-17 10:26:57.654574] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:54.633 [2024-10-17 10:26:57.654662] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:54.894 [2024-10-17 10:26:57.821844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.821909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:54.894 [2024-10-17 10:26:57.821927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:54.894 [2024-10-17 10:26:57.821943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.822005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.822017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.894 [2024-10-17 10:26:57.822027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:54.894 [2024-10-17 10:26:57.822039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.822083] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:54.894 [2024-10-17 10:26:57.822810] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:54.894 [2024-10-17 10:26:57.822832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.822846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.894 [2024-10-17 10:26:57.822856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:27:54.894 [2024-10-17 10:26:57.822865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.825212] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:54.894 [2024-10-17 10:26:57.840835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.840897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:54.894 [2024-10-17 10:26:57.840913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.624 ms 00:27:54.894 [2024-10-17 10:26:57.840923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.841017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.841028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:54.894 [2024-10-17 10:26:57.841042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:54.894 [2024-10-17 10:26:57.841069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.852934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:54.894 [2024-10-17 10:26:57.852974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.894 [2024-10-17 10:26:57.852987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.778 ms 00:27:54.894 [2024-10-17 10:26:57.852996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.853113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.853125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.894 [2024-10-17 10:26:57.853135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:27:54.894 [2024-10-17 10:26:57.853144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.853208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.853220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:54.894 [2024-10-17 10:26:57.853229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:54.894 [2024-10-17 10:26:57.853238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.853263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:54.894 [2024-10-17 10:26:57.858056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.858091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.894 [2024-10-17 10:26:57.858102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:27:54.894 [2024-10-17 10:26:57.858111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.858153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.858163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:54.894 [2024-10-17 10:26:57.858173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:54.894 [2024-10-17 10:26:57.858182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.858221] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:54.894 [2024-10-17 10:26:57.858249] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:54.894 [2024-10-17 10:26:57.858291] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:54.894 [2024-10-17 10:26:57.858311] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:54.894 [2024-10-17 10:26:57.858426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:54.894 [2024-10-17 10:26:57.858439] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:54.894 [2024-10-17 10:26:57.858451] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:54.894 [2024-10-17 10:26:57.858463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:54.894 [2024-10-17 10:26:57.858476] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:54.894 [2024-10-17 10:26:57.858488] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:54.894 [2024-10-17 10:26:57.858497] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:54.894 [2024-10-17 10:26:57.858505] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:54.894 [2024-10-17 10:26:57.858515] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:54.894 [2024-10-17 10:26:57.858524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.858536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:54.894 [2024-10-17 10:26:57.858545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:27:54.894 [2024-10-17 10:26:57.858554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.858643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.894 [2024-10-17 10:26:57.858663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:54.894 [2024-10-17 10:26:57.858673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:54.894 [2024-10-17 10:26:57.858682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.894 [2024-10-17 10:26:57.858792] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:54.894 [2024-10-17 10:26:57.858806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:54.894 [2024-10-17 10:26:57.858818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:54.894 [2024-10-17 10:26:57.858827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:54.894 [2024-10-17 10:26:57.858846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:54.894 [2024-10-17 10:26:57.858862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:54.894 [2024-10-17 10:26:57.858872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:54.894 [2024-10-17 10:26:57.858887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:54.894 [2024-10-17 10:26:57.858895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:54.894 [2024-10-17 10:26:57.858902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:54.894 [2024-10-17 10:26:57.858909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:54.894 [2024-10-17 10:26:57.858918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:54.894 [2024-10-17 10:26:57.858932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:54.894 [2024-10-17 10:26:57.858947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:54.894 [2024-10-17 10:26:57.858954] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:54.894 [2024-10-17 10:26:57.858968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.894 [2024-10-17 10:26:57.858982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:54.894 [2024-10-17 10:26:57.858990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:54.894 [2024-10-17 10:26:57.858998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.894 [2024-10-17 10:26:57.859004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:54.894 [2024-10-17 10:26:57.859011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:54.894 [2024-10-17 10:26:57.859018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.894 [2024-10-17 10:26:57.859025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:54.894 [2024-10-17 10:26:57.859031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:54.894 [2024-10-17 10:26:57.859038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:54.894 [2024-10-17 10:26:57.859046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:54.895 [2024-10-17 10:26:57.859077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:54.895 [2024-10-17 10:26:57.859085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:54.895 [2024-10-17 10:26:57.859092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:54.895 [2024-10-17 10:26:57.859099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:54.895 [2024-10-17 10:26:57.859106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:54.895 [2024-10-17 10:26:57.859113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:54.895 [2024-10-17 10:26:57.859122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:54.895 [2024-10-17 10:26:57.859130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.895 [2024-10-17 10:26:57.859140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:54.895 [2024-10-17 10:26:57.859148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:54.895 [2024-10-17 10:26:57.859155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.895 [2024-10-17 10:26:57.859162] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:54.895 [2024-10-17 10:26:57.859171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:54.895 [2024-10-17 10:26:57.859179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:54.895 [2024-10-17 10:26:57.859188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:54.895 [2024-10-17 10:26:57.859210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:54.895 [2024-10-17 10:26:57.859217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:54.895 [2024-10-17 10:26:57.859225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:54.895 
[2024-10-17 10:26:57.859232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:54.895 [2024-10-17 10:26:57.859240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:54.895 [2024-10-17 10:26:57.859247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:54.895 [2024-10-17 10:26:57.859257] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:54.895 [2024-10-17 10:26:57.859269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:54.895 [2024-10-17 10:26:57.859286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:54.895 [2024-10-17 10:26:57.859293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:54.895 [2024-10-17 10:26:57.859301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:54.895 [2024-10-17 10:26:57.859309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:54.895 [2024-10-17 10:26:57.859317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:54.895 [2024-10-17 10:26:57.859325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:54.895 [2024-10-17 10:26:57.859333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:54.895 [2024-10-17 10:26:57.859340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:54.895 [2024-10-17 10:26:57.859349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:54.895 [2024-10-17 10:26:57.859389] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:54.895 [2024-10-17 10:26:57.859398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:54.895 [2024-10-17 10:26:57.859427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:54.895 [2024-10-17 10:26:57.859435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:54.895 [2024-10-17 10:26:57.859444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:54.895 [2024-10-17 10:26:57.859452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.859462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:54.895 [2024-10-17 10:26:57.859470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:27:54.895 [2024-10-17 10:26:57.859478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.898218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.898265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:54.895 [2024-10-17 10:26:57.898279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.687 ms 00:27:54.895 [2024-10-17 10:26:57.898288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.898390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.898406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:54.895 [2024-10-17 10:26:57.898415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:54.895 [2024-10-17 10:26:57.898424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.949006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.949068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:54.895 [2024-10-17 10:26:57.949083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.517 ms 00:27:54.895 [2024-10-17 10:26:57.949093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.949150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.949161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:54.895 [2024-10-17 10:26:57.949171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:54.895 [2024-10-17 10:26:57.949179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.949947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.949975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:54.895 [2024-10-17 10:26:57.949988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:27:54.895 [2024-10-17 10:26:57.950000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.950222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.950237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:54.895 [2024-10-17 10:26:57.950248] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:27:54.895 [2024-10-17 10:26:57.950257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.895 [2024-10-17 10:26:57.968695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.895 [2024-10-17 10:26:57.968739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:54.895 [2024-10-17 10:26:57.968753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.410 ms 00:27:54.895 [2024-10-17 10:26:57.968766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:57.984377] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:55.157 [2024-10-17 10:26:57.984424] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:55.157 [2024-10-17 10:26:57.984439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:57.984448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:55.157 [2024-10-17 10:26:57.984460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.542 ms 00:27:55.157 [2024-10-17 10:26:57.984467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.011385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.011432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:55.157 [2024-10-17 10:26:58.011454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:27:55.157 [2024-10-17 10:26:58.011463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.024835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.024881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:55.157 [2024-10-17 10:26:58.024894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.310 ms 00:27:55.157 [2024-10-17 10:26:58.024902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.038126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.038168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:55.157 [2024-10-17 10:26:58.038181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.163 ms 00:27:55.157 [2024-10-17 10:26:58.038189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.038855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.038893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:55.157 [2024-10-17 10:26:58.038904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:27:55.157 [2024-10-17 10:26:58.038913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.113887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.113948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:55.157 [2024-10-17 10:26:58.113967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.950 ms 00:27:55.157 [2024-10-17 10:26:58.113987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.126467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:55.157 [2024-10-17 10:26:58.130245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.130285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:55.157 [2024-10-17 10:26:58.130300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.161 ms 00:27:55.157 [2024-10-17 10:26:58.130310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.130406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.130419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:55.157 [2024-10-17 10:26:58.130430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:55.157 [2024-10-17 10:26:58.130439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.131623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.131670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:55.157 [2024-10-17 10:26:58.131683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:27:55.157 [2024-10-17 10:26:58.131692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.131726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.131736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:55.157 [2024-10-17 10:26:58.131746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:55.157 [2024-10-17 10:26:58.131755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.131808] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:55.157 [2024-10-17 10:26:58.131821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.131835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:55.157 [2024-10-17 10:26:58.131845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:55.157 [2024-10-17 10:26:58.131853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.158985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.159032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:55.157 [2024-10-17 10:26:58.159055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.111 ms 00:27:55.157 [2024-10-17 10:26:58.159065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.157 [2024-10-17 10:26:58.159175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.157 [2024-10-17 10:26:58.159187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:55.157 [2024-10-17 10:26:58.159210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:55.157 [2024-10-17 10:26:58.159220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
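(Annotation, not part of the captured console output: every management step above is reported by mngt/ftl_mngt.c:trace_step as a four-record group -- Action or Rollback, name, duration, status. A throwaway sketch for tallying per-step durations from a saved console log; it assumes one record per line, as the console originally emitted them, and "console.log" is a hypothetical capture file, not something produced by this job.)

    # Pair each "name:" record with the "duration:" record that follows it.
    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); printf "%8s ms  %s\n", $1, name }' console.log
    # prints lines such as:  15.624 ms  Load super block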
00:27:55.157 [2024-10-17 10:26:58.160940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.509 ms, result 0 00:27:56.594  [2024-10-17T10:27:00.624Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-17T10:27:01.564Z] Copying: 25/1024 [MB] (15 MBps) [2024-10-17T10:27:02.496Z] Copying: 40/1024 [MB] (14 MBps) [2024-10-17T10:27:03.440Z] Copying: 52/1024 [MB] (12 MBps) [2024-10-17T10:27:04.381Z] Copying: 67/1024 [MB] (14 MBps) [2024-10-17T10:27:05.763Z] Copying: 86/1024 [MB] (19 MBps) [2024-10-17T10:27:06.696Z] Copying: 96/1024 [MB] (10 MBps) [2024-10-17T10:27:07.638Z] Copying: 109/1024 [MB] (12 MBps) [2024-10-17T10:27:08.577Z] Copying: 121/1024 [MB] (12 MBps) [2024-10-17T10:27:09.520Z] Copying: 140/1024 [MB] (19 MBps) [2024-10-17T10:27:10.461Z] Copying: 155/1024 [MB] (15 MBps) [2024-10-17T10:27:11.401Z] Copying: 173/1024 [MB] (17 MBps) [2024-10-17T10:27:12.785Z] Copying: 192/1024 [MB] (18 MBps) [2024-10-17T10:27:13.354Z] Copying: 212/1024 [MB] (20 MBps) [2024-10-17T10:27:14.735Z] Copying: 227/1024 [MB] (14 MBps) [2024-10-17T10:27:15.686Z] Copying: 249/1024 [MB] (21 MBps) [2024-10-17T10:27:16.673Z] Copying: 265/1024 [MB] (16 MBps) [2024-10-17T10:27:17.617Z] Copying: 282036/1048576 [kB] (9936 kBps) [2024-10-17T10:27:18.559Z] Copying: 293/1024 [MB] (17 MBps) [2024-10-17T10:27:19.500Z] Copying: 307/1024 [MB] (14 MBps) [2024-10-17T10:27:20.442Z] Copying: 324776/1048576 [kB] (9544 kBps) [2024-10-17T10:27:21.378Z] Copying: 328/1024 [MB] (11 MBps) [2024-10-17T10:27:22.762Z] Copying: 342/1024 [MB] (13 MBps) [2024-10-17T10:27:23.706Z] Copying: 354/1024 [MB] (12 MBps) [2024-10-17T10:27:24.707Z] Copying: 365/1024 [MB] (10 MBps) [2024-10-17T10:27:25.648Z] Copying: 379/1024 [MB] (14 MBps) [2024-10-17T10:27:26.588Z] Copying: 390/1024 [MB] (11 MBps) [2024-10-17T10:27:27.521Z] Copying: 402/1024 [MB] (11 MBps) [2024-10-17T10:27:28.463Z] Copying: 413/1024 [MB] (11 MBps) [2024-10-17T10:27:29.401Z] Copying: 429/1024 [MB] (15 MBps) [2024-10-17T10:27:30.777Z] Copying: 440/1024 [MB] (10 MBps) [2024-10-17T10:27:31.717Z] Copying: 452/1024 [MB] (11 MBps) [2024-10-17T10:27:32.663Z] Copying: 469/1024 [MB] (17 MBps) [2024-10-17T10:27:33.671Z] Copying: 480/1024 [MB] (10 MBps) [2024-10-17T10:27:34.615Z] Copying: 492/1024 [MB] (12 MBps) [2024-10-17T10:27:35.560Z] Copying: 514232/1048576 [kB] (10040 kBps) [2024-10-17T10:27:36.603Z] Copying: 514/1024 [MB] (12 MBps) [2024-10-17T10:27:37.547Z] Copying: 530/1024 [MB] (16 MBps) [2024-10-17T10:27:38.484Z] Copying: 541/1024 [MB] (11 MBps) [2024-10-17T10:27:39.424Z] Copying: 553/1024 [MB] (12 MBps) [2024-10-17T10:27:40.367Z] Copying: 563/1024 [MB] (10 MBps) [2024-10-17T10:27:41.752Z] Copying: 574/1024 [MB] (10 MBps) [2024-10-17T10:27:42.691Z] Copying: 585/1024 [MB] (10 MBps) [2024-10-17T10:27:43.628Z] Copying: 601/1024 [MB] (16 MBps) [2024-10-17T10:27:44.572Z] Copying: 620/1024 [MB] (18 MBps) [2024-10-17T10:27:45.511Z] Copying: 630/1024 [MB] (10 MBps) [2024-10-17T10:27:46.453Z] Copying: 644/1024 [MB] (13 MBps) [2024-10-17T10:27:47.388Z] Copying: 664/1024 [MB] (20 MBps) [2024-10-17T10:27:48.761Z] Copying: 679/1024 [MB] (14 MBps) [2024-10-17T10:27:49.697Z] Copying: 695/1024 [MB] (16 MBps) [2024-10-17T10:27:50.639Z] Copying: 720/1024 [MB] (25 MBps) [2024-10-17T10:27:51.585Z] Copying: 734/1024 [MB] (13 MBps) [2024-10-17T10:27:52.554Z] Copying: 747/1024 [MB] (13 MBps) [2024-10-17T10:27:53.497Z] Copying: 757/1024 [MB] (10 MBps) [2024-10-17T10:27:54.440Z] Copying: 768/1024 [MB] (10 MBps) [2024-10-17T10:27:55.379Z] Copying: 778/1024 
[MB] (10 MBps) [2024-10-17T10:27:56.758Z] Copying: 792/1024 [MB] (13 MBps) [2024-10-17T10:27:57.699Z] Copying: 811/1024 [MB] (19 MBps) [2024-10-17T10:27:58.639Z] Copying: 837/1024 [MB] (25 MBps) [2024-10-17T10:27:59.581Z] Copying: 867/1024 [MB] (30 MBps) [2024-10-17T10:28:00.525Z] Copying: 890/1024 [MB] (23 MBps) [2024-10-17T10:28:01.468Z] Copying: 912/1024 [MB] (22 MBps) [2024-10-17T10:28:02.408Z] Copying: 939/1024 [MB] (26 MBps) [2024-10-17T10:28:03.790Z] Copying: 966/1024 [MB] (27 MBps) [2024-10-17T10:28:04.050Z] Copying: 1007/1024 [MB] (41 MBps) [2024-10-17T10:28:04.050Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-17 10:28:03.933535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.933625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:00.959 [2024-10-17 10:28:03.933646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:00.959 [2024-10-17 10:28:03.933659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.933691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:00.959 [2024-10-17 10:28:03.937678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.937715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:00.959 [2024-10-17 10:28:03.937730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.966 ms 00:29:00.959 [2024-10-17 10:28:03.937743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.938850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.938876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:00.959 [2024-10-17 10:28:03.938890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:29:00.959 [2024-10-17 10:28:03.938901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.942901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.942928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:00.959 [2024-10-17 10:28:03.942939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:29:00.959 [2024-10-17 10:28:03.942947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.949073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.949102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:00.959 [2024-10-17 10:28:03.949112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:29:00.959 [2024-10-17 10:28:03.949120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.974032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.974074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:00.959 [2024-10-17 10:28:03.974084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.857 ms 00:29:00.959 [2024-10-17 10:28:03.974092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.988818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:29:00.959 [2024-10-17 10:28:03.988845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:00.959 [2024-10-17 10:28:03.988857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.707 ms 00:29:00.959 [2024-10-17 10:28:03.988865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:03.992698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:03.992726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:00.959 [2024-10-17 10:28:03.992741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.798 ms 00:29:00.959 [2024-10-17 10:28:03.992749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:04.016017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:04.016045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:00.959 [2024-10-17 10:28:04.016062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.254 ms 00:29:00.959 [2024-10-17 10:28:04.016069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.959 [2024-10-17 10:28:04.038958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.959 [2024-10-17 10:28:04.038993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:00.959 [2024-10-17 10:28:04.039003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:29:00.959 [2024-10-17 10:28:04.039009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.222 [2024-10-17 10:28:04.062170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.222 [2024-10-17 10:28:04.062199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:01.222 [2024-10-17 10:28:04.062209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.142 ms 00:29:01.222 [2024-10-17 10:28:04.062216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.222 [2024-10-17 10:28:04.084864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.222 [2024-10-17 10:28:04.084888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:01.222 [2024-10-17 10:28:04.084898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.608 ms 00:29:01.222 [2024-10-17 10:28:04.084906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.222 [2024-10-17 10:28:04.084923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:01.222 [2024-10-17 10:28:04.084936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:01.222 [2024-10-17 10:28:04.084945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:01.222 [2024-10-17 10:28:04.084953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.084961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.084969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.084976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.084984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.084991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.084999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085174] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 
10:28:04.085358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:01.222 [2024-10-17 10:28:04.085372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:29:01.223 [2024-10-17 10:28:04.085538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:01.223 [2024-10-17 10:28:04.085703] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:01.223 [2024-10-17 10:28:04.085715] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 420ae77d-e8d2-4328-83d9-23d3af78b30f 00:29:01.223 [2024-10-17 10:28:04.085722] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:01.223 [2024-10-17 10:28:04.085732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:01.223 [2024-10-17 10:28:04.085739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:01.223 [2024-10-17 10:28:04.085746] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:29:01.223 [2024-10-17 10:28:04.085753] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:01.223 [2024-10-17 10:28:04.085760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:01.223 [2024-10-17 10:28:04.085774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:01.223 [2024-10-17 10:28:04.085781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:01.223 [2024-10-17 10:28:04.085787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:01.223 [2024-10-17 10:28:04.085794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.223 [2024-10-17 10:28:04.085803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:01.223 [2024-10-17 10:28:04.085812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:29:01.223 [2024-10-17 10:28:04.085820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.098556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.223 [2024-10-17 10:28:04.098580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:01.223 [2024-10-17 10:28:04.098591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.720 ms 00:29:01.223 [2024-10-17 10:28:04.098599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.098984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.223 [2024-10-17 10:28:04.098999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:01.223 [2024-10-17 10:28:04.099008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:29:01.223 [2024-10-17 10:28:04.099019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.134287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.134316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:01.223 [2024-10-17 10:28:04.134327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.134336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.134391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.134400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:01.223 [2024-10-17 10:28:04.134409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.134422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.134474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.134485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:01.223 [2024-10-17 10:28:04.134494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.134503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.134521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.134530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:01.223 [2024-10-17 
10:28:04.134539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.134547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.217738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.217785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:01.223 [2024-10-17 10:28:04.217799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.217807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.286591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.286648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:01.223 [2024-10-17 10:28:04.286661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.286671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.286802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.286814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:01.223 [2024-10-17 10:28:04.286824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.286833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.286871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.286880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:01.223 [2024-10-17 10:28:04.286890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.286898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.287006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.287016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:01.223 [2024-10-17 10:28:04.287026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.287034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.287114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.287126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:01.223 [2024-10-17 10:28:04.287135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.287144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.223 [2024-10-17 10:28:04.287189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.223 [2024-10-17 10:28:04.287202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:01.223 [2024-10-17 10:28:04.287213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.223 [2024-10-17 10:28:04.287221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.224 [2024-10-17 10:28:04.287272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.224 [2024-10-17 10:28:04.287283] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:01.224 [2024-10-17 10:28:04.287292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.224 [2024-10-17 10:28:04.287301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.224 [2024-10-17 10:28:04.287447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.877 ms, result 0 00:29:02.164 00:29:02.164 00:29:02.164 10:28:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:04.705 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77673 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77673 ']' 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 77673 00:29:04.705 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77673) - No such process 00:29:04.705 Process with pid 77673 is not found 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 77673 is not found' 00:29:04.705 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:04.967 Remove shared memory files 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:04.967 00:29:04.967 real 4m46.698s 00:29:04.967 user 5m13.693s 00:29:04.967 sys 0m26.809s 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.967 ************************************ 00:29:04.967 10:28:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.967 END TEST ftl_dirty_shutdown 00:29:04.967 ************************************ 00:29:04.967 10:28:07 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:04.967 10:28:07 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 
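(Annotation, not part of the captured console output: the killprocess step above degrades gracefully when the target is already gone -- "kill -0" only probes for the pid, so the already-exited app with pid 77673 is reported rather than treated as a test failure. A simplified sketch of that guard; the real helper in autotest_common.sh carries extra bookkeeping.)

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then   # probe only: is the process still alive?
            kill "$pid"                       # yes -- actually signal it
        else
            echo "Process with pid $pid is not found"
        fi
    }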
00:29:04.967 10:28:07 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.967 10:28:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:04.967 ************************************ 00:29:04.967 START TEST ftl_upgrade_shutdown 00:29:04.967 ************************************ 00:29:04.967 10:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:04.967 * Looking for test storage... 00:29:04.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:04.967 10:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:04.967 10:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:29:04.967 10:28:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:04.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.967 --rc genhtml_branch_coverage=1 00:29:04.967 --rc genhtml_function_coverage=1 00:29:04.967 --rc genhtml_legend=1 00:29:04.967 --rc geninfo_all_blocks=1 00:29:04.967 --rc geninfo_unexecuted_blocks=1 00:29:04.967 00:29:04.967 ' 00:29:04.967 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:04.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.967 --rc genhtml_branch_coverage=1 00:29:04.967 --rc genhtml_function_coverage=1 00:29:04.967 --rc genhtml_legend=1 00:29:04.967 --rc geninfo_all_blocks=1 00:29:04.967 --rc geninfo_unexecuted_blocks=1 00:29:04.967 00:29:04.968 ' 00:29:04.968 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:04.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.968 --rc genhtml_branch_coverage=1 00:29:04.968 --rc genhtml_function_coverage=1 00:29:04.968 --rc genhtml_legend=1 00:29:04.968 --rc geninfo_all_blocks=1 00:29:04.968 --rc geninfo_unexecuted_blocks=1 00:29:04.968 00:29:04.968 ' 00:29:04.968 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:04.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.968 --rc genhtml_branch_coverage=1 00:29:04.968 --rc genhtml_function_coverage=1 00:29:04.968 --rc genhtml_legend=1 00:29:04.968 --rc geninfo_all_blocks=1 00:29:04.968 --rc geninfo_unexecuted_blocks=1 00:29:04.968 00:29:04.968 ' 00:29:04.968 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:04.968 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:04.968 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:05.229 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:05.230 10:28:08 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80745 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80745 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80745 ']' 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.230 10:28:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:05.230 [2024-10-17 10:28:08.151436] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
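The suite's target process (pid 80745, per waitforlisten above) is starting here: spdk_tgt pinned to core 0, with the harness blocking until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of the same bring-up follows; it is an approximation, not the literal waitforlisten helper, which lives in autotest_common.sh:

    # launch the SPDK target on core 0 and wait for its RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # poll with a real RPC (rpc_get_methods) until the app is listening
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done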
00:29:05.230 [2024-10-17 10:28:08.151559] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80745 ] 00:29:05.230 [2024-10-17 10:28:08.300824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.491 [2024-10-17 10:28:08.409683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:06.061 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb
00:29:06.321 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:29:06.582 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[
00:29:06.582 {
00:29:06.582 "name": "basen1",
00:29:06.582 "aliases": [
00:29:06.582 "9806b95c-7c47-4fe3-9b56-4374280fae4d"
00:29:06.582 ],
00:29:06.582 "product_name": "NVMe disk",
00:29:06.582 "block_size": 4096,
00:29:06.582 "num_blocks": 1310720,
00:29:06.582 "uuid": "9806b95c-7c47-4fe3-9b56-4374280fae4d",
00:29:06.582 "numa_id": -1,
00:29:06.582 "assigned_rate_limits": {
00:29:06.582 "rw_ios_per_sec": 0,
00:29:06.582 "rw_mbytes_per_sec": 0,
00:29:06.582 "r_mbytes_per_sec": 0,
00:29:06.582 "w_mbytes_per_sec": 0
00:29:06.582 },
00:29:06.582 "claimed": true,
00:29:06.582 "claim_type": "read_many_write_one",
00:29:06.582 "zoned": false,
00:29:06.582 "supported_io_types": {
00:29:06.582 "read": true,
00:29:06.582 "write": true,
00:29:06.582 "unmap": true,
00:29:06.582 "flush": true,
00:29:06.582 "reset": true,
00:29:06.582 "nvme_admin": true,
00:29:06.582 "nvme_io": true,
00:29:06.582 "nvme_io_md": false,
00:29:06.582 "write_zeroes": true,
00:29:06.582 "zcopy": false,
00:29:06.582 "get_zone_info": false,
00:29:06.582 "zone_management": false,
00:29:06.582 "zone_append": false,
00:29:06.582 "compare": true,
00:29:06.582 "compare_and_write": false,
00:29:06.582 "abort": true,
00:29:06.582 "seek_hole": false,
00:29:06.582 "seek_data": false,
00:29:06.582 "copy": true,
00:29:06.582 "nvme_iov_md": false
00:29:06.582 },
00:29:06.582 "driver_specific": {
00:29:06.582 "nvme": [
00:29:06.582 {
00:29:06.582 "pci_address": "0000:00:11.0",
00:29:06.582 "trid": {
00:29:06.582 "trtype": "PCIe",
00:29:06.582 "traddr": "0000:00:11.0"
00:29:06.582 },
00:29:06.582 "ctrlr_data": {
00:29:06.582 "cntlid": 0,
00:29:06.582 "vendor_id": "0x1b36",
00:29:06.582 "model_number": "QEMU NVMe Ctrl",
00:29:06.582 "serial_number": "12341",
00:29:06.582 "firmware_revision": "8.0.0",
00:29:06.582 "subnqn": "nqn.2019-08.org.qemu:12341",
00:29:06.582 "oacs": {
00:29:06.582 "security": 0,
00:29:06.582 "format": 1,
00:29:06.582 "firmware": 0,
00:29:06.582 "ns_manage": 1
00:29:06.582 },
00:29:06.582 "multi_ctrlr": false,
00:29:06.582 "ana_reporting": false
00:29:06.582 },
00:29:06.582 "vs": {
00:29:06.582 "nvme_version": "1.4"
00:29:06.582 },
00:29:06.582 "ns_data": {
00:29:06.582 "id": 1,
00:29:06.582 "can_share": false
00:29:06.582 }
00:29:06.582 }
00:29:06.582 ],
00:29:06.582 "mp_policy": "active_passive"
00:29:06.582 }
00:29:06.582 }
00:29:06.583 ]'
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:06.583 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:06.844 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0765e629-0a8f-44f0-9f67-5df86e49f823
00:29:06.844 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:29:06.844 10:28:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0765e629-0a8f-44f0-9f67-5df86e49f823
00:29:07.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5c2d444e-fbb5-4f8b-ba46-da0079cc0c76
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5c2d444e-fbb5-4f8b-ba46-da0079cc0c76
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a0b4f498-6d18-4918-af2c-c604db915886
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a0b4f498-6d18-4918-af2c-c604db915886 ]]
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a0b4f498-6d18-4918-af2c-c604db915886 5120
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a0b4f498-6d18-4918-af2c-c604db915886
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a0b4f498-6d18-4918-af2c-c604db915886
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a0b4f498-6d18-4918-af2c-c604db915886
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb
00:29:07.366 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a0b4f498-6d18-4918-af2c-c604db915886
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[
00:29:07.639 {
00:29:07.639 "name": "a0b4f498-6d18-4918-af2c-c604db915886",
00:29:07.639 "aliases": [
00:29:07.639 "lvs/basen1p0"
00:29:07.639 ],
00:29:07.639 "product_name": "Logical Volume",
00:29:07.639 "block_size": 4096,
00:29:07.639 "num_blocks": 5242880,
00:29:07.639 "uuid": "a0b4f498-6d18-4918-af2c-c604db915886",
00:29:07.639 "assigned_rate_limits": {
00:29:07.639 "rw_ios_per_sec": 0,
00:29:07.639 "rw_mbytes_per_sec": 0,
00:29:07.639 "r_mbytes_per_sec": 0,
00:29:07.639 "w_mbytes_per_sec": 0
00:29:07.639 },
00:29:07.639 "claimed": false,
00:29:07.639 "zoned": false,
00:29:07.639 "supported_io_types": {
00:29:07.639 "read": true,
00:29:07.639 "write": true,
00:29:07.639 "unmap": true,
00:29:07.639 "flush": false,
00:29:07.639 "reset": true,
00:29:07.639 "nvme_admin": false,
00:29:07.639 "nvme_io": false,
00:29:07.639 "nvme_io_md": false,
00:29:07.639 "write_zeroes": true,
00:29:07.639 "zcopy": false,
00:29:07.639 "get_zone_info": false,
00:29:07.639 "zone_management": false,
00:29:07.639 "zone_append": false,
00:29:07.639 "compare": false,
00:29:07.639 "compare_and_write": false,
00:29:07.639 "abort": false,
00:29:07.639 "seek_hole": true,
00:29:07.639 "seek_data": true,
00:29:07.639 "copy": false,
00:29:07.639 "nvme_iov_md": false
00:29:07.639 },
00:29:07.639 "driver_specific": {
00:29:07.639 "lvol": {
00:29:07.639 "lvol_store_uuid": "5c2d444e-fbb5-4f8b-ba46-da0079cc0c76",
00:29:07.639 "base_bdev": "basen1",
00:29:07.639 "thin_provision": true,
00:29:07.639 "num_allocated_clusters": 0,
00:29:07.639 "snapshot": false,
00:29:07.639 "clone": false,
00:29:07.639 "esnap_clone": false
00:29:07.639 }
00:29:07.639 }
00:29:07.639 }
00:29:07.639 ]'
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:29:07.639 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
00:29:07.925 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:29:07.925 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:29:07.925 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:29:08.183 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:29:08.183 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
00:29:08.183 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a0b4f498-6d18-4918-af2c-c604db915886 -c cachen1p0 --l2p_dram_limit 2
00:29:08.445 [2024-10-17 10:28:11.339315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:08.445 [2024-10-17 10:28:11.339369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration
00:29:08.445 [2024-10-17 10:28:11.339383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:29:08.445 [2024-10-17 10:28:11.339391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:08.445 [2024-10-17 10:28:11.339429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:08.445 [2024-10-17 10:28:11.339440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:29:08.445 [2024-10-17 10:28:11.339448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms
00:29:08.445 [2024-10-17 10:28:11.339454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:08.445 [2024-10-17 10:28:11.339472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:29:08.445 [2024-10-17
10:28:11.340280] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:08.445 [2024-10-17 10:28:11.340317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.340325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:08.445 [2024-10-17 10:28:11.340335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:29:08.445 [2024-10-17 10:28:11.340342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.340414] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 31f79b9b-0c07-4bc8-a6cf-edcdeabf24e9 00:29:08.445 [2024-10-17 10:28:11.341765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.341800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:08.445 [2024-10-17 10:28:11.341809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:08.445 [2024-10-17 10:28:11.341818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.348993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.349024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:08.445 [2024-10-17 10:28:11.349033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.134 ms 00:29:08.445 [2024-10-17 10:28:11.349041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.349091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.349101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:08.445 [2024-10-17 10:28:11.349108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:08.445 [2024-10-17 10:28:11.349121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.349158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.349169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:08.445 [2024-10-17 10:28:11.349176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:08.445 [2024-10-17 10:28:11.349185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.349201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:08.445 [2024-10-17 10:28:11.352494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.352520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:08.445 [2024-10-17 10:28:11.352529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.296 ms 00:29:08.445 [2024-10-17 10:28:11.352539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.352563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.352571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:08.445 [2024-10-17 10:28:11.352579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:08.445 [2024-10-17 10:28:11.352586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.352606] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:08.445 [2024-10-17 10:28:11.352715] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:08.445 [2024-10-17 10:28:11.352729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:08.445 [2024-10-17 10:28:11.352739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:08.445 [2024-10-17 10:28:11.352748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:08.445 [2024-10-17 10:28:11.352755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:08.445 [2024-10-17 10:28:11.352763] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:08.445 [2024-10-17 10:28:11.352769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:08.445 [2024-10-17 10:28:11.352778] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:08.445 [2024-10-17 10:28:11.352783] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:08.445 [2024-10-17 10:28:11.352790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.352798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:08.445 [2024-10-17 10:28:11.352806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:29:08.445 [2024-10-17 10:28:11.352812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.352878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.445 [2024-10-17 10:28:11.352885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:08.445 [2024-10-17 10:28:11.352893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:08.445 [2024-10-17 10:28:11.352904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.445 [2024-10-17 10:28:11.352980] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:08.445 [2024-10-17 10:28:11.352994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:08.445 [2024-10-17 10:28:11.353005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:08.445 [2024-10-17 10:28:11.353011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.445 [2024-10-17 10:28:11.353019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:08.445 [2024-10-17 10:28:11.353025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:08.445 [2024-10-17 10:28:11.353032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:08.445 [2024-10-17 10:28:11.353038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:08.445 [2024-10-17 10:28:11.353055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:08.445 [2024-10-17 10:28:11.353061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.445 [2024-10-17 10:28:11.353068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:08.445 [2024-10-17 10:28:11.353075] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:08.446 [2024-10-17 10:28:11.353081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:08.446 [2024-10-17 10:28:11.353095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:08.446 [2024-10-17 10:28:11.353103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:08.446 [2024-10-17 10:28:11.353118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:08.446 [2024-10-17 10:28:11.353125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:08.446 [2024-10-17 10:28:11.353138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:08.446 [2024-10-17 10:28:11.353144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:08.446 [2024-10-17 10:28:11.353157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:08.446 [2024-10-17 10:28:11.353164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:08.446 [2024-10-17 10:28:11.353176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:08.446 [2024-10-17 10:28:11.353182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:08.446 [2024-10-17 10:28:11.353195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:08.446 [2024-10-17 10:28:11.353201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:08.446 [2024-10-17 10:28:11.353215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:08.446 [2024-10-17 10:28:11.353220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:08.446 [2024-10-17 10:28:11.353232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:08.446 [2024-10-17 10:28:11.353250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:08.446 [2024-10-17 10:28:11.353268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:08.446 [2024-10-17 10:28:11.353274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353279] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:08.446 [2024-10-17 10:28:11.353286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:08.446 [2024-10-17 10:28:11.353292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.446 [2024-10-17 10:28:11.353307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:08.446 [2024-10-17 10:28:11.353317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:08.446 [2024-10-17 10:28:11.353322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:08.446 [2024-10-17 10:28:11.353329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:08.446 [2024-10-17 10:28:11.353334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:08.446 [2024-10-17 10:28:11.353342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:08.446 [2024-10-17 10:28:11.353351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:08.446 [2024-10-17 10:28:11.353360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:08.446 [2024-10-17 10:28:11.353375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:08.446 [2024-10-17 10:28:11.353394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:08.446 [2024-10-17 10:28:11.353400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:08.446 [2024-10-17 10:28:11.353406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:08.446 [2024-10-17 10:28:11.353412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:08.446 [2024-10-17 10:28:11.353458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:08.446 [2024-10-17 10:28:11.353466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:08.446 [2024-10-17 10:28:11.353482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:08.446 [2024-10-17 10:28:11.353488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:08.446 [2024-10-17 10:28:11.353496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:08.446 [2024-10-17 10:28:11.353502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.446 [2024-10-17 10:28:11.353510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:08.446 [2024-10-17 10:28:11.353516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.577 ms 00:29:08.446 [2024-10-17 10:28:11.353523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.446 [2024-10-17 10:28:11.353574] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
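While the scrub runs ('this may take a while', per the notice above), it is worth restating how this FTL instance was assembled: a thin-provisioned 20480 MiB lvol on the base NVMe device serves as the data bdev, and a 5120 MiB split of the second NVMe device serves as the NV cache. Condensed from the xtrace earlier in this test (the UUIDs are the ones reported by this run and will differ elsewhere):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                            # -> lvstore uuid
    $rpc bdev_lvol_create basen1p0 20480 -t -u 5c2d444e-fbb5-4f8b-ba46-da0079cc0c76
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
    $rpc -t 60 bdev_ftl_create -b ftl -d a0b4f498-6d18-4918-af2c-c604db915886 \
        -c cachen1p0 --l2p_dram_limit 2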
00:29:08.446 [2024-10-17 10:28:11.353592] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:10.992 [2024-10-17 10:28:14.011593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.011661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:10.992 [2024-10-17 10:28:14.011677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2658.008 ms 00:29:10.992 [2024-10-17 10:28:14.011688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.992 [2024-10-17 10:28:14.040391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.040446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:10.992 [2024-10-17 10:28:14.040460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.486 ms 00:29:10.992 [2024-10-17 10:28:14.040470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.992 [2024-10-17 10:28:14.040551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.040565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:10.992 [2024-10-17 10:28:14.040574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:10.992 [2024-10-17 10:28:14.040587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.992 [2024-10-17 10:28:14.074037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.074089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:10.992 [2024-10-17 10:28:14.074102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.416 ms 00:29:10.992 [2024-10-17 10:28:14.074113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.992 [2024-10-17 10:28:14.074143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.074156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:10.992 [2024-10-17 10:28:14.074165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:10.992 [2024-10-17 10:28:14.074178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.992 [2024-10-17 10:28:14.074637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.074666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:10.992 [2024-10-17 10:28:14.074676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:29:10.992 [2024-10-17 10:28:14.074686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.992 [2024-10-17 10:28:14.074736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.992 [2024-10-17 10:28:14.074747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:10.992 [2024-10-17 10:28:14.074755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:10.992 [2024-10-17 10:28:14.074768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.090385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.090421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:11.253 [2024-10-17 10:28:14.090431] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.596 ms 00:29:11.253 [2024-10-17 10:28:14.090443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.102661] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:11.253 [2024-10-17 10:28:14.103671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.103701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:11.253 [2024-10-17 10:28:14.103714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.154 ms 00:29:11.253 [2024-10-17 10:28:14.103722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.136609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.136651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:11.253 [2024-10-17 10:28:14.136668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.859 ms 00:29:11.253 [2024-10-17 10:28:14.136676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.136771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.136782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:11.253 [2024-10-17 10:28:14.136795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:11.253 [2024-10-17 10:28:14.136806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.160055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.160088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:11.253 [2024-10-17 10:28:14.160101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.195 ms 00:29:11.253 [2024-10-17 10:28:14.160110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.182248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.182280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:11.253 [2024-10-17 10:28:14.182293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.100 ms 00:29:11.253 [2024-10-17 10:28:14.182301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.182878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.182899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:11.253 [2024-10-17 10:28:14.182910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:29:11.253 [2024-10-17 10:28:14.182919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.264928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.264972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:11.253 [2024-10-17 10:28:14.264991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.972 ms 00:29:11.253 [2024-10-17 10:28:14.264999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.290578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:11.253 [2024-10-17 10:28:14.290632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:11.253 [2024-10-17 10:28:14.290661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.491 ms 00:29:11.253 [2024-10-17 10:28:14.290670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.316805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.316848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:11.253 [2024-10-17 10:28:14.316862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.089 ms 00:29:11.253 [2024-10-17 10:28:14.316870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.253 [2024-10-17 10:28:14.340464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.253 [2024-10-17 10:28:14.340502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:11.253 [2024-10-17 10:28:14.340521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.548 ms 00:29:11.253 [2024-10-17 10:28:14.340529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.254 [2024-10-17 10:28:14.340574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.254 [2024-10-17 10:28:14.340584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:11.254 [2024-10-17 10:28:14.340597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:11.254 [2024-10-17 10:28:14.340605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.254 [2024-10-17 10:28:14.340690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.254 [2024-10-17 10:28:14.340701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:11.254 [2024-10-17 10:28:14.340712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:11.254 [2024-10-17 10:28:14.340719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.254 [2024-10-17 10:28:14.341733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3001.938 ms, result 0 00:29:11.514 { 00:29:11.514 "name": "ftl", 00:29:11.514 "uuid": "31f79b9b-0c07-4bc8-a6cf-edcdeabf24e9" 00:29:11.514 } 00:29:11.514 10:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:11.514 [2024-10-17 10:28:14.552979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.514 10:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:11.775 10:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:12.037 [2024-10-17 10:28:14.949408] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:12.037 10:28:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:12.298 [2024-10-17 10:28:15.158468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:12.298 10:28:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:12.560 Fill FTL, iteration 1 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80857 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80857 /var/tmp/spdk.tgt.sock 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80857 ']' 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.560 10:28:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:12.560 [2024-10-17 10:28:15.585098] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
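The tcp_dd helper invoked for the fill first needs an initiator-side configuration: a short-lived SPDK app is started on core 1 with its own RPC socket, attaches the just-exported NVMe-oF namespace as bdev ftln1, and has its bdev subsystem config captured for spdk_dd to replay. A sketch of that setup, matching the commands traced around here (the redirect into ini.json happens inside tcp_initiator_setup and is assumed, not shown in this excerpt):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1
    # capture the bdev subsystem config in the shape spdk_dd expects
    { echo '{"subsystems": ['
      $rpc -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json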
00:29:12.560 [2024-10-17 10:28:15.585217] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80857 ] 00:29:12.821 [2024-10-17 10:28:15.735250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.821 [2024-10-17 10:28:15.829328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.394 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.394 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:13.394 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:13.655 ftln1 00:29:13.655 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:13.656 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80857 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80857 ']' 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80857 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80857 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:13.915 killing process with pid 80857 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80857' 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80857 00:29:13.915 10:28:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80857 00:29:15.286 10:28:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:15.286 10:28:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:15.286 [2024-10-17 10:28:18.153784] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
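With ini.json in place and the throwaway initiator killed, the spdk_dd run starting here performs the actual fill: 1024 blocks of 1 MiB from /dev/urandom into ftln1 at queue depth 2, i.e. the full 1 GiB declared by size=1073741824 above. The invocation, as traced:

    # fill iteration 1: 1024 x 1 MiB random writes, qd 2, from offset 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0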
00:29:15.286 [2024-10-17 10:28:18.153905] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80900 ] 00:29:15.286 [2024-10-17 10:28:18.301108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.544 [2024-10-17 10:28:18.379304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.930  [2024-10-17T10:28:20.969Z] Copying: 246/1024 [MB] (246 MBps) [2024-10-17T10:28:21.910Z] Copying: 469/1024 [MB] (223 MBps) [2024-10-17T10:28:22.852Z] Copying: 695/1024 [MB] (226 MBps) [2024-10-17T10:28:23.113Z] Copying: 952/1024 [MB] (257 MBps) [2024-10-17T10:28:23.685Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:29:20.595 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:20.595 Calculate MD5 checksum, iteration 1 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:20.595 10:28:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.595 [2024-10-17 10:28:23.614757] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:29:20.595 [2024-10-17 10:28:23.614877] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80959 ] 00:29:20.856 [2024-10-17 10:28:23.764524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.856 [2024-10-17 10:28:23.845735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.239  [2024-10-17T10:28:25.900Z] Copying: 643/1024 [MB] (643 MBps) [2024-10-17T10:28:26.477Z] Copying: 1024/1024 [MB] (average 635 MBps) 00:29:23.386 00:29:23.386 10:28:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:23.386 10:28:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:25.305 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:25.305 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=14ff52dbcbcf9c397d381e35919246c8 00:29:25.305 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:25.561 Fill FTL, iteration 2 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:25.561 10:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:25.561 [2024-10-17 10:28:28.459976] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:29:25.561 [2024-10-17 10:28:28.460104] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81015 ] 00:29:25.561 [2024-10-17 10:28:28.607759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.818 [2024-10-17 10:28:28.681892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.196  [2024-10-17T10:28:31.230Z] Copying: 92/1024 [MB] (92 MBps) [2024-10-17T10:28:32.173Z] Copying: 348/1024 [MB] (256 MBps) [2024-10-17T10:28:33.116Z] Copying: 603/1024 [MB] (255 MBps) [2024-10-17T10:28:34.054Z] Copying: 849/1024 [MB] (246 MBps) [2024-10-17T10:28:34.312Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:29:31.221 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:31.221 Calculate MD5 checksum, iteration 2 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:31.221 10:28:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:31.554 [2024-10-17 10:28:34.326162] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:29:31.554 [2024-10-17 10:28:34.326283] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81074 ] 00:29:31.554 [2024-10-17 10:28:34.473246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.554 [2024-10-17 10:28:34.548142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.936  [2024-10-17T10:28:36.598Z] Copying: 656/1024 [MB] (656 MBps) [2024-10-17T10:28:37.540Z] Copying: 1024/1024 [MB] (average 653 MBps) 00:29:34.449 00:29:34.449 10:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:34.449 10:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:36.349 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:36.349 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f986d2f19c1a91202c13bc1be01f2b53 00:29:36.349 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:36.349 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:36.349 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:36.607 [2024-10-17 10:28:39.610723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.607 [2024-10-17 10:28:39.610779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:36.607 [2024-10-17 10:28:39.610793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:36.607 [2024-10-17 10:28:39.610800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.607 [2024-10-17 10:28:39.610820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.607 [2024-10-17 10:28:39.610828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:36.607 [2024-10-17 10:28:39.610834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:36.607 [2024-10-17 10:28:39.610841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.607 [2024-10-17 10:28:39.610860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:36.607 [2024-10-17 10:28:39.610868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:36.607 [2024-10-17 10:28:39.610875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:36.607 [2024-10-17 10:28:39.610881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:36.607 [2024-10-17 10:28:39.610935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.207 ms, result 0 00:29:36.607 true 00:29:36.607 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:36.866 { 00:29:36.866 "name": "ftl", 00:29:36.866 "properties": [ 00:29:36.866 { 00:29:36.866 "name": "superblock_version", 00:29:36.866 "value": 5, 00:29:36.866 "read-only": true 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "name": "base_device", 00:29:36.866 "bands": [ 00:29:36.866 { 00:29:36.866 "id": 0, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 
00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 1, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 2, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 3, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 4, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 5, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 6, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 7, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 8, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 9, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 10, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 11, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 12, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 13, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 14, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 15, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 16, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 17, 00:29:36.866 "state": "FREE", 00:29:36.866 "validity": 0.0 00:29:36.866 } 00:29:36.866 ], 00:29:36.866 "read-only": true 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "name": "cache_device", 00:29:36.866 "type": "bdev", 00:29:36.866 "chunks": [ 00:29:36.866 { 00:29:36.866 "id": 0, 00:29:36.866 "state": "INACTIVE", 00:29:36.866 "utilization": 0.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 1, 00:29:36.866 "state": "CLOSED", 00:29:36.866 "utilization": 1.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 2, 00:29:36.866 "state": "CLOSED", 00:29:36.866 "utilization": 1.0 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 3, 00:29:36.866 "state": "OPEN", 00:29:36.866 "utilization": 0.001953125 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "id": 4, 00:29:36.866 "state": "OPEN", 00:29:36.866 "utilization": 0.0 00:29:36.866 } 00:29:36.866 ], 00:29:36.866 "read-only": true 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "name": "verbose_mode", 00:29:36.866 "value": true, 00:29:36.866 "unit": "", 00:29:36.866 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:36.866 }, 00:29:36.866 { 00:29:36.866 "name": "prep_upgrade_on_shutdown", 00:29:36.866 "value": false, 00:29:36.866 "unit": "", 00:29:36.866 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:36.866 } 00:29:36.866 ] 00:29:36.866 } 00:29:36.866 10:28:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:37.125 [2024-10-17 10:28:40.019017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:37.125 [2024-10-17 10:28:40.019071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:37.125 [2024-10-17 10:28:40.019081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:37.125 [2024-10-17 10:28:40.019088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.125 [2024-10-17 10:28:40.019104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.125 [2024-10-17 10:28:40.019111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:37.125 [2024-10-17 10:28:40.019118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:37.125 [2024-10-17 10:28:40.019123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.125 [2024-10-17 10:28:40.019139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.125 [2024-10-17 10:28:40.019145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:37.125 [2024-10-17 10:28:40.019152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:37.125 [2024-10-17 10:28:40.019158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.125 [2024-10-17 10:28:40.019201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.179 ms, result 0 00:29:37.125 true 00:29:37.125 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:37.125 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:37.125 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:37.383 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:37.383 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:37.383 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:37.383 [2024-10-17 10:28:40.435332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.383 [2024-10-17 10:28:40.435378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:37.383 [2024-10-17 10:28:40.435387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:37.383 [2024-10-17 10:28:40.435393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.383 [2024-10-17 10:28:40.435410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.383 [2024-10-17 10:28:40.435418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:37.383 [2024-10-17 10:28:40.435424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:37.383 [2024-10-17 10:28:40.435430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.383 [2024-10-17 10:28:40.435445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.383 [2024-10-17 10:28:40.435451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:37.383 [2024-10-17 10:28:40.435458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:37.383 [2024-10-17 10:28:40.435464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:37.383 [2024-10-17 10:28:40.435507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0 00:29:37.383 true 00:29:37.383 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:37.642 { 00:29:37.642 "name": "ftl", 00:29:37.642 "properties": [ 00:29:37.642 { 00:29:37.642 "name": "superblock_version", 00:29:37.642 "value": 5, 00:29:37.642 "read-only": true 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "name": "base_device", 00:29:37.642 "bands": [ 00:29:37.642 { 00:29:37.642 "id": 0, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 1, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 2, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 3, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 4, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 5, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 6, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 7, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 8, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 9, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 10, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 11, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 12, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 13, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 14, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 15, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 16, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 17, 00:29:37.642 "state": "FREE", 00:29:37.642 "validity": 0.0 00:29:37.642 } 00:29:37.642 ], 00:29:37.642 "read-only": true 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "name": "cache_device", 00:29:37.642 "type": "bdev", 00:29:37.642 "chunks": [ 00:29:37.642 { 00:29:37.642 "id": 0, 00:29:37.642 "state": "INACTIVE", 00:29:37.642 "utilization": 0.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 1, 00:29:37.642 "state": "CLOSED", 00:29:37.642 "utilization": 1.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 2, 00:29:37.642 "state": "CLOSED", 00:29:37.642 "utilization": 1.0 00:29:37.642 }, 00:29:37.642 { 00:29:37.642 "id": 3, 00:29:37.643 "state": "OPEN", 00:29:37.643 "utilization": 0.001953125 00:29:37.643 }, 00:29:37.643 { 00:29:37.643 "id": 4, 00:29:37.643 "state": "OPEN", 00:29:37.643 "utilization": 0.0 00:29:37.643 } 00:29:37.643 ], 00:29:37.643 "read-only": true 00:29:37.643 }, 00:29:37.643 { 00:29:37.643 "name": "verbose_mode", 
00:29:37.643 "value": true, 00:29:37.643 "unit": "", 00:29:37.643 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:37.643 }, 00:29:37.643 { 00:29:37.643 "name": "prep_upgrade_on_shutdown", 00:29:37.643 "value": true, 00:29:37.643 "unit": "", 00:29:37.643 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:37.643 } 00:29:37.643 ] 00:29:37.643 } 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80745 ]] 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80745 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80745 ']' 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80745 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80745 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:37.643 killing process with pid 80745 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80745' 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80745 00:29:37.643 10:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80745 00:29:38.210 [2024-10-17 10:28:41.273399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:38.210 [2024-10-17 10:28:41.286415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.210 [2024-10-17 10:28:41.286470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:38.210 [2024-10-17 10:28:41.286483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:38.210 [2024-10-17 10:28:41.286490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.210 [2024-10-17 10:28:41.286510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:38.210 [2024-10-17 10:28:41.288694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.210 [2024-10-17 10:28:41.288720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:38.210 [2024-10-17 10:28:41.288729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.173 ms 00:29:38.210 [2024-10-17 10:28:41.288737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.268 [2024-10-17 10:28:49.539756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.268 [2024-10-17 10:28:49.539829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:48.268 [2024-10-17 10:28:49.539842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8250.971 ms 00:29:48.268 [2024-10-17 10:28:49.539850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.268 [2024-10-17 10:28:49.540913] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:48.268 [2024-10-17 10:28:49.540939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:48.268 [2024-10-17 10:28:49.540947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.050 ms 00:29:48.268 [2024-10-17 10:28:49.540953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.268 [2024-10-17 10:28:49.541831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.268 [2024-10-17 10:28:49.541853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:48.268 [2024-10-17 10:28:49.541861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:29:48.268 [2024-10-17 10:28:49.541868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.268 [2024-10-17 10:28:49.549855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.268 [2024-10-17 10:28:49.549887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:48.268 [2024-10-17 10:28:49.549895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.947 ms 00:29:48.268 [2024-10-17 10:28:49.549902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.268 [2024-10-17 10:28:49.555643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.268 [2024-10-17 10:28:49.555671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:48.268 [2024-10-17 10:28:49.555680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.714 ms 00:29:48.268 [2024-10-17 10:28:49.555687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.268 [2024-10-17 10:28:49.555761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.268 [2024-10-17 10:28:49.555771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:48.269 [2024-10-17 10:28:49.555779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:48.269 [2024-10-17 10:28:49.555786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.563151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.563177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:48.269 [2024-10-17 10:28:49.563184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.348 ms 00:29:48.269 [2024-10-17 10:28:49.563191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.570381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.570414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:48.269 [2024-10-17 10:28:49.570421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.164 ms 00:29:48.269 [2024-10-17 10:28:49.570427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.577722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.577750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:48.269 [2024-10-17 10:28:49.577757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.269 ms 00:29:48.269 [2024-10-17 10:28:49.577763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.585086] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.585114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:48.269 [2024-10-17 10:28:49.585121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.272 ms 00:29:48.269 [2024-10-17 10:28:49.585126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.585151] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:48.269 [2024-10-17 10:28:49.585163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:48.269 [2024-10-17 10:28:49.585173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:48.269 [2024-10-17 10:28:49.585186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:48.269 [2024-10-17 10:28:49.585194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:48.269 [2024-10-17 10:28:49.585288] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:48.269 [2024-10-17 10:28:49.585294] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 31f79b9b-0c07-4bc8-a6cf-edcdeabf24e9 00:29:48.269 [2024-10-17 10:28:49.585300] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:48.269 [2024-10-17 10:28:49.585306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:48.269 [2024-10-17 10:28:49.585311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:48.269 [2024-10-17 10:28:49.585318] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:48.269 [2024-10-17 10:28:49.585324] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:48.269 [2024-10-17 10:28:49.585330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:48.269 [2024-10-17 10:28:49.585336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:48.269 [2024-10-17 10:28:49.585341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:48.269 [2024-10-17 10:28:49.585346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:48.269 [2024-10-17 10:28:49.585351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.585360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:48.269 [2024-10-17 10:28:49.585367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:29:48.269 [2024-10-17 10:28:49.585376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.595505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.595530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:48.269 [2024-10-17 10:28:49.595539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.109 ms 00:29:48.269 [2024-10-17 10:28:49.595546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.595835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.269 [2024-10-17 10:28:49.595849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:48.269 [2024-10-17 10:28:49.595856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:29:48.269 [2024-10-17 10:28:49.595862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.630646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.630674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:48.269 [2024-10-17 10:28:49.630682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.630689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.630716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.630723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:48.269 [2024-10-17 10:28:49.630729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.630735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.630797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.630806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:48.269 [2024-10-17 10:28:49.630813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.630819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.630832] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.630842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:48.269 [2024-10-17 10:28:49.630848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.630854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.694718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.694757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:48.269 [2024-10-17 10:28:49.694768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.694775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:48.269 [2024-10-17 10:28:49.746445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:48.269 [2024-10-17 10:28:49.746534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:48.269 [2024-10-17 10:28:49.746606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:48.269 [2024-10-17 10:28:49.746705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:48.269 [2024-10-17 10:28:49.746756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:48.269 [2024-10-17 10:28:49.746814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 
[2024-10-17 10:28:49.746862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:48.269 [2024-10-17 10:28:49.746872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:48.269 [2024-10-17 10:28:49.746881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:48.269 [2024-10-17 10:28:49.746889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.269 [2024-10-17 10:28:49.746995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8460.531 ms, result 0 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81275 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81275 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81275 ']' 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.530 10:28:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.789 [2024-10-17 10:28:51.625192] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:29:48.789 [2024-10-17 10:28:51.625317] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81275 ] 00:29:48.789 [2024-10-17 10:28:51.773706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.789 [2024-10-17 10:28:51.871983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.725 [2024-10-17 10:28:52.506435] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:49.725 [2024-10-17 10:28:52.506498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:49.725 [2024-10-17 10:28:52.655455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.655505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:49.725 [2024-10-17 10:28:52.655518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:49.725 [2024-10-17 10:28:52.655525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.655570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.655580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:49.725 [2024-10-17 10:28:52.655587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:49.725 [2024-10-17 10:28:52.655592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.655611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:49.725 [2024-10-17 10:28:52.656220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:49.725 [2024-10-17 10:28:52.656241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.656248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:49.725 [2024-10-17 10:28:52.656255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:29:49.725 [2024-10-17 10:28:52.656262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.657565] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:49.725 [2024-10-17 10:28:52.668432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.668463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:49.725 [2024-10-17 10:28:52.668473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.868 ms 00:29:49.725 [2024-10-17 10:28:52.668484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.668533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.668541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:49.725 [2024-10-17 10:28:52.668548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:49.725 [2024-10-17 10:28:52.668554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.674855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 
10:28:52.674882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:49.725 [2024-10-17 10:28:52.674894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.256 ms 00:29:49.725 [2024-10-17 10:28:52.674900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.674946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.674955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:49.725 [2024-10-17 10:28:52.674962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:49.725 [2024-10-17 10:28:52.674968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.675018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.675027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:49.725 [2024-10-17 10:28:52.675034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:49.725 [2024-10-17 10:28:52.675042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.675072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:49.725 [2024-10-17 10:28:52.678256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.678284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:49.725 [2024-10-17 10:28:52.678292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.188 ms 00:29:49.725 [2024-10-17 10:28:52.678298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.678325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.678332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:49.725 [2024-10-17 10:28:52.678339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:49.725 [2024-10-17 10:28:52.678345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.678362] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:49.725 [2024-10-17 10:28:52.678394] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:49.725 [2024-10-17 10:28:52.678426] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:49.725 [2024-10-17 10:28:52.678439] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:49.725 [2024-10-17 10:28:52.678526] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:49.725 [2024-10-17 10:28:52.678536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:49.725 [2024-10-17 10:28:52.678544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:49.725 [2024-10-17 10:28:52.678552] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:49.725 [2024-10-17 10:28:52.678560] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:49.725 [2024-10-17 10:28:52.678566] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:49.725 [2024-10-17 10:28:52.678574] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:49.725 [2024-10-17 10:28:52.678580] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:49.725 [2024-10-17 10:28:52.678586] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:49.725 [2024-10-17 10:28:52.678593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.678598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:49.725 [2024-10-17 10:28:52.678605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.232 ms 00:29:49.725 [2024-10-17 10:28:52.678611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.678678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.725 [2024-10-17 10:28:52.678685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:49.725 [2024-10-17 10:28:52.678691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:49.725 [2024-10-17 10:28:52.678698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.725 [2024-10-17 10:28:52.678780] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:49.725 [2024-10-17 10:28:52.678788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:49.725 [2024-10-17 10:28:52.678795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:49.725 [2024-10-17 10:28:52.678801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.725 [2024-10-17 10:28:52.678808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:49.725 [2024-10-17 10:28:52.678814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:49.725 [2024-10-17 10:28:52.678820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:49.725 [2024-10-17 10:28:52.678829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:49.725 [2024-10-17 10:28:52.678835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:49.725 [2024-10-17 10:28:52.678841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.725 [2024-10-17 10:28:52.678847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:49.725 [2024-10-17 10:28:52.678853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:49.725 [2024-10-17 10:28:52.678858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.725 [2024-10-17 10:28:52.678864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:49.725 [2024-10-17 10:28:52.678871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:49.725 [2024-10-17 10:28:52.678876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.725 [2024-10-17 10:28:52.678882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:49.725 [2024-10-17 10:28:52.678888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:49.725 [2024-10-17 10:28:52.678893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.725 [2024-10-17 10:28:52.678900] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:49.725 [2024-10-17 10:28:52.678906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:49.725 [2024-10-17 10:28:52.678911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.725 [2024-10-17 10:28:52.678917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:49.725 [2024-10-17 10:28:52.678922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:49.726 [2024-10-17 10:28:52.678928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.726 [2024-10-17 10:28:52.678938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:49.726 [2024-10-17 10:28:52.678943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:49.726 [2024-10-17 10:28:52.678949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.726 [2024-10-17 10:28:52.678954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:49.726 [2024-10-17 10:28:52.678960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:49.726 [2024-10-17 10:28:52.678965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:49.726 [2024-10-17 10:28:52.678970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:49.726 [2024-10-17 10:28:52.678976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:49.726 [2024-10-17 10:28:52.678981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.726 [2024-10-17 10:28:52.678987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:49.726 [2024-10-17 10:28:52.678993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:49.726 [2024-10-17 10:28:52.678998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.726 [2024-10-17 10:28:52.679003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:49.726 [2024-10-17 10:28:52.679008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:49.726 [2024-10-17 10:28:52.679015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.726 [2024-10-17 10:28:52.679021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:49.726 [2024-10-17 10:28:52.679026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:49.726 [2024-10-17 10:28:52.679031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.726 [2024-10-17 10:28:52.679036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:49.726 [2024-10-17 10:28:52.679043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:49.726 [2024-10-17 10:28:52.679062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:49.726 [2024-10-17 10:28:52.679068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:49.726 [2024-10-17 10:28:52.679074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:49.726 [2024-10-17 10:28:52.679080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:49.726 [2024-10-17 10:28:52.679086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:49.726 [2024-10-17 10:28:52.679091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:49.726 [2024-10-17 10:28:52.679096] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:49.726 [2024-10-17 10:28:52.679103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:49.726 [2024-10-17 10:28:52.679110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:49.726 [2024-10-17 10:28:52.679120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:49.726 [2024-10-17 10:28:52.679134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:49.726 [2024-10-17 10:28:52.679152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:49.726 [2024-10-17 10:28:52.679158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:49.726 [2024-10-17 10:28:52.679165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:49.726 [2024-10-17 10:28:52.679171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:49.726 [2024-10-17 10:28:52.679211] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:49.726 [2024-10-17 10:28:52.679218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:49.726 [2024-10-17 10:28:52.679232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:49.726 [2024-10-17 10:28:52.679237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:49.726 [2024-10-17 10:28:52.679243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:49.726 [2024-10-17 10:28:52.679249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.726 [2024-10-17 10:28:52.679255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:49.726 [2024-10-17 10:28:52.679261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:29:49.726 [2024-10-17 10:28:52.679268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.726 [2024-10-17 10:28:52.679313] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:49.726 [2024-10-17 10:28:52.679326] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:53.926 [2024-10-17 10:28:56.756668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.756739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:53.926 [2024-10-17 10:28:56.756756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4077.339 ms 00:29:53.926 [2024-10-17 10:28:56.756765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.785104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.785149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:53.926 [2024-10-17 10:28:56.785163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.072 ms 00:29:53.926 [2024-10-17 10:28:56.785171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.785250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.785261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:53.926 [2024-10-17 10:28:56.785275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:53.926 [2024-10-17 10:28:56.785283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.817869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.817907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:53.926 [2024-10-17 10:28:56.817918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.534 ms 00:29:53.926 [2024-10-17 10:28:56.817926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.817958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.817968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:53.926 [2024-10-17 10:28:56.817977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:53.926 [2024-10-17 10:28:56.817984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.818438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.818533] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:53.926 [2024-10-17 10:28:56.818547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:29:53.926 [2024-10-17 10:28:56.818555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.818598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.818611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:53.926 [2024-10-17 10:28:56.818619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:53.926 [2024-10-17 10:28:56.818627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.834402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.834436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:53.926 [2024-10-17 10:28:56.834447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.753 ms 00:29:53.926 [2024-10-17 10:28:56.834455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.847226] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:53.926 [2024-10-17 10:28:56.847263] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:53.926 [2024-10-17 10:28:56.847276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.847284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:53.926 [2024-10-17 10:28:56.847293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.719 ms 00:29:53.926 [2024-10-17 10:28:56.847301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.860830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.860863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:53.926 [2024-10-17 10:28:56.860874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.489 ms 00:29:53.926 [2024-10-17 10:28:56.860882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.871875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.871905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:53.926 [2024-10-17 10:28:56.871915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.953 ms 00:29:53.926 [2024-10-17 10:28:56.871922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.883217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.883246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:53.926 [2024-10-17 10:28:56.883256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.260 ms 00:29:53.926 [2024-10-17 10:28:56.883263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.883875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.883942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:53.926 [2024-10-17 
10:28:56.883955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:29:53.926 [2024-10-17 10:28:56.883962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.960023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.960257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:53.926 [2024-10-17 10:28:56.960284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.042 ms 00:29:53.926 [2024-10-17 10:28:56.960294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.970965] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:53.926 [2024-10-17 10:28:56.971757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.971893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:53.926 [2024-10-17 10:28:56.971911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.188 ms 00:29:53.926 [2024-10-17 10:28:56.971921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.972005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.972017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:53.926 [2024-10-17 10:28:56.972029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:53.926 [2024-10-17 10:28:56.972037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.972107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.972119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:53.926 [2024-10-17 10:28:56.972129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:53.926 [2024-10-17 10:28:56.972137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.972158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.972167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:53.926 [2024-10-17 10:28:56.972175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:53.926 [2024-10-17 10:28:56.972183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.972221] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:53.926 [2024-10-17 10:28:56.972231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.972239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:53.926 [2024-10-17 10:28:56.972247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:53.926 [2024-10-17 10:28:56.972255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.995309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.995343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:53.926 [2024-10-17 10:28:56.995359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.034 ms 00:29:53.926 [2024-10-17 10:28:56.995367] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.995440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.926 [2024-10-17 10:28:56.995451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:53.926 [2024-10-17 10:28:56.995459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:53.926 [2024-10-17 10:28:56.995467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.926 [2024-10-17 10:28:56.996603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4340.669 ms, result 0 00:29:53.926 [2024-10-17 10:28:57.011664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.188 [2024-10-17 10:28:57.027655] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:54.188 [2024-10-17 10:28:57.035787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:54.759 10:28:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.759 10:28:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:54.759 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:54.759 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:54.759 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:54.759 [2024-10-17 10:28:57.764403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.759 [2024-10-17 10:28:57.764443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:54.759 [2024-10-17 10:28:57.764456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:54.759 [2024-10-17 10:28:57.764464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.759 [2024-10-17 10:28:57.764489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.759 [2024-10-17 10:28:57.764498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:54.759 [2024-10-17 10:28:57.764506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:54.759 [2024-10-17 10:28:57.764514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.759 [2024-10-17 10:28:57.764533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.759 [2024-10-17 10:28:57.764541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:54.759 [2024-10-17 10:28:57.764550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:54.759 [2024-10-17 10:28:57.764557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.759 [2024-10-17 10:28:57.764613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.198 ms, result 0 00:29:54.759 true 00:29:54.759 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:55.020 { 00:29:55.020 "name": "ftl", 00:29:55.020 "properties": [ 00:29:55.020 { 00:29:55.020 "name": "superblock_version", 00:29:55.020 "value": 5, 00:29:55.020 "read-only": true 00:29:55.020 }, 
00:29:55.020 { 00:29:55.020 "name": "base_device", 00:29:55.020 "bands": [ 00:29:55.020 { 00:29:55.020 "id": 0, 00:29:55.020 "state": "CLOSED", 00:29:55.020 "validity": 1.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 1, 00:29:55.020 "state": "CLOSED", 00:29:55.020 "validity": 1.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 2, 00:29:55.020 "state": "CLOSED", 00:29:55.020 "validity": 0.007843137254901933 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 3, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 4, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 5, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 6, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 7, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 8, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 9, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 10, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 11, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 12, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 13, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 14, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 15, 00:29:55.020 "state": "FREE", 00:29:55.020 "validity": 0.0 00:29:55.020 }, 00:29:55.020 { 00:29:55.020 "id": 16, 00:29:55.020 "state": "FREE", 00:29:55.021 "validity": 0.0 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "id": 17, 00:29:55.021 "state": "FREE", 00:29:55.021 "validity": 0.0 00:29:55.021 } 00:29:55.021 ], 00:29:55.021 "read-only": true 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "name": "cache_device", 00:29:55.021 "type": "bdev", 00:29:55.021 "chunks": [ 00:29:55.021 { 00:29:55.021 "id": 0, 00:29:55.021 "state": "INACTIVE", 00:29:55.021 "utilization": 0.0 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "id": 1, 00:29:55.021 "state": "OPEN", 00:29:55.021 "utilization": 0.0 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "id": 2, 00:29:55.021 "state": "OPEN", 00:29:55.021 "utilization": 0.0 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "id": 3, 00:29:55.021 "state": "FREE", 00:29:55.021 "utilization": 0.0 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "id": 4, 00:29:55.021 "state": "FREE", 00:29:55.021 "utilization": 0.0 00:29:55.021 } 00:29:55.021 ], 00:29:55.021 "read-only": true 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "name": "verbose_mode", 00:29:55.021 "value": true, 00:29:55.021 "unit": "", 00:29:55.021 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:55.021 }, 00:29:55.021 { 00:29:55.021 "name": "prep_upgrade_on_shutdown", 00:29:55.021 "value": false, 00:29:55.021 "unit": "", 00:29:55.021 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:55.021 } 00:29:55.021 ] 00:29:55.021 } 00:29:55.021 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:55.021 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:55.021 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:55.282 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:55.282 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:55.282 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:55.282 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:55.282 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:55.542 Validate MD5 checksum, iteration 1 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:55.542 10:28:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:55.542 [2024-10-17 10:28:58.446037] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
00:29:55.542 [2024-10-17 10:28:58.446347] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81368 ] 00:29:55.542 [2024-10-17 10:28:58.595688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.801 [2024-10-17 10:28:58.690152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.183 [2024-10-17T10:29:01.219Z] Copying: 615/1024 [MB] (615 MBps) [2024-10-17T10:29:06.510Z] Copying: 1024/1024 [MB] (average 589 MBps) 00:30:03.419 00:30:03.419 10:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:03.419 10:29:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=14ff52dbcbcf9c397d381e35919246c8 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 14ff52dbcbcf9c397d381e35919246c8 != \1\4\f\f\5\2\d\b\c\b\c\f\9\c\3\9\7\d\3\8\1\e\3\5\9\1\9\2\4\6\c\8 ]] 00:30:05.945 Validate MD5 checksum, iteration 2 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:05.945 10:29:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:05.945 [2024-10-17 10:29:08.662080] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization...
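Each validation iteration above dumps a 1 GiB window of the ftln1 NVMe/TCP initiator bdev to a file with spdk_dd, then MD5-sums the dump and compares it against the checksum recorded for that window earlier in the test (the backslash-escaped literal in the [[ ]] comparison); iteration 1 matched on 14ff52dbcbcf9c397d381e35919246c8, so the loop moved on to skip=1024. A sketch of one pass, reconstructed from the flags in this trace (tcp_dd is the test helper that wraps spdk_dd over the tgt RPC socket; expected stands in for the stored per-window sum):

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  skip=$((skip + 1024))                  # next window starts 1 GiB further in
  sum=$(md5sum "$file" | cut -f1 -d' ')
  [[ $sum == "$expected" ]] || return 1  # any mismatch fails the test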
00:30:05.945 [2024-10-17 10:29:08.662339] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81474 ] 00:30:05.945 [2024-10-17 10:29:08.806590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.095 [2024-10-17 10:29:08.908726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.324 [2024-10-17T10:29:11.359Z] Copying: 629/1024 [MB] (629 MBps) [2024-10-17T10:29:12.297Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:30:09.206 00:30:09.206 10:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:09.206 10:29:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f986d2f19c1a91202c13bc1be01f2b53 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f986d2f19c1a91202c13bc1be01f2b53 != \f\9\8\6\d\2\f\1\9\c\1\a\9\1\2\0\2\c\1\3\b\c\1\b\e\0\1\f\2\b\5\3 ]] 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81275 ]] 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81275 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:11.107 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81534 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81534 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81534 ']' 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
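This is the point of the whole test: tcp_target_shutdown_dirty kills the target with SIGKILL (pid 81275 here), so FTL gets no chance to flush metadata or close its bands, and tcp_target_setup immediately brings up a fresh spdk_tgt from the saved tgt.json. Roughly what those two helpers do, under the paths seen in this trace (the backgrounding with & and $! is a sketch; PIDs are run-specific):

  kill -9 "$spdk_tgt_pid"          # dirty on purpose: no FTL shutdown path runs
  unset spdk_tgt_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!                  # 81534 in this run
  waitforlisten "$spdk_tgt_pid"    # poll until /var/tmp/spdk.sock answers

Because the superblock was left dirty, the FTL startup that follows takes the recovery path: note the 'SHM: clean 0, shm_clean 0' load below and the recovery steps that replace the NV-cache scrub seen on the first startup.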
00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:11.108 10:29:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.108 [2024-10-17 10:29:13.783368] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:30:11.108 [2024-10-17 10:29:13.783606] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81534 ] 00:30:11.108 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81275 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:11.108 [2024-10-17 10:29:13.929234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.108 [2024-10-17 10:29:14.020955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.674 [2024-10-17 10:29:14.654878] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:11.674 [2024-10-17 10:29:14.655148] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:11.933 [2024-10-17 10:29:14.799685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.933 [2024-10-17 10:29:14.799825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:11.933 [2024-10-17 10:29:14.799881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:11.933 [2024-10-17 10:29:14.799900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.933 [2024-10-17 10:29:14.799962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.933 [2024-10-17 10:29:14.799984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:11.933 [2024-10-17 10:29:14.800000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:11.934 [2024-10-17 10:29:14.800015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.800059] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:11.934 [2024-10-17 10:29:14.800690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:11.934 [2024-10-17 10:29:14.800706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.800712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:11.934 [2024-10-17 10:29:14.800720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.667 ms 00:30:11.934 [2024-10-17 10:29:14.800726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.800970] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:11.934 [2024-10-17 10:29:14.814854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.814972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:11.934 [2024-10-17 10:29:14.814987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.885 ms 00:30:11.934 [2024-10-17 10:29:14.814993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.822059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:11.934 [2024-10-17 10:29:14.822147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:11.934 [2024-10-17 10:29:14.822191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:11.934 [2024-10-17 10:29:14.822214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.822497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.822772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:11.934 [2024-10-17 10:29:14.822847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:30:11.934 [2024-10-17 10:29:14.822869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.822937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.822958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:11.934 [2024-10-17 10:29:14.823017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:11.934 [2024-10-17 10:29:14.823035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.823084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.823105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:11.934 [2024-10-17 10:29:14.823123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:11.934 [2024-10-17 10:29:14.823171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.823205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:11.934 [2024-10-17 10:29:14.825517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.825634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:11.934 [2024-10-17 10:29:14.825834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.317 ms 00:30:11.934 [2024-10-17 10:29:14.825852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.825888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.825909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:11.934 [2024-10-17 10:29:14.825925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:11.934 [2024-10-17 10:29:14.825939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.825965] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:11.934 [2024-10-17 10:29:14.825992] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:11.934 [2024-10-17 10:29:14.826075] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:11.934 [2024-10-17 10:29:14.826163] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:11.934 [2024-10-17 10:29:14.826327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:11.934 [2024-10-17 10:29:14.826355] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:11.934 [2024-10-17 10:29:14.826380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:11.934 [2024-10-17 10:29:14.826403] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:11.934 [2024-10-17 10:29:14.826427] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:11.934 [2024-10-17 10:29:14.826449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:11.934 [2024-10-17 10:29:14.826580] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:11.934 [2024-10-17 10:29:14.826598] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:11.934 [2024-10-17 10:29:14.826612] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:11.934 [2024-10-17 10:29:14.826626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.826640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:11.934 [2024-10-17 10:29:14.826658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.664 ms 00:30:11.934 [2024-10-17 10:29:14.826672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.826749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.934 [2024-10-17 10:29:14.826764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:11.934 [2024-10-17 10:29:14.826779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:11.934 [2024-10-17 10:29:14.826792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.934 [2024-10-17 10:29:14.826895] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:11.934 [2024-10-17 10:29:14.826915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:11.934 [2024-10-17 10:29:14.826931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:11.934 [2024-10-17 10:29:14.826948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.826963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:11.934 [2024-10-17 10:29:14.826979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.826993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:11.934 [2024-10-17 10:29:14.827007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:11.934 [2024-10-17 10:29:14.827021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:11.934 [2024-10-17 10:29:14.827035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:11.934 [2024-10-17 10:29:14.827080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:11.934 [2024-10-17 10:29:14.827094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:11.934 [2024-10-17 10:29:14.827184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:11.934 [2024-10-17 10:29:14.827204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:11.934 [2024-10-17 10:29:14.827260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:11.934 [2024-10-17 10:29:14.827281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:11.934 [2024-10-17 10:29:14.827423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:11.934 [2024-10-17 10:29:14.827501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.934 [2024-10-17 10:29:14.827518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:11.934 [2024-10-17 10:29:14.827538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:11.934 [2024-10-17 10:29:14.827553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.934 [2024-10-17 10:29:14.827567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:11.934 [2024-10-17 10:29:14.827581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:11.934 [2024-10-17 10:29:14.827595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.934 [2024-10-17 10:29:14.827609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:11.934 [2024-10-17 10:29:14.827622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:11.934 [2024-10-17 10:29:14.827637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.934 [2024-10-17 10:29:14.827683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:11.934 [2024-10-17 10:29:14.827700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:11.934 [2024-10-17 10:29:14.827714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:11.934 [2024-10-17 10:29:14.827742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:11.934 [2024-10-17 10:29:14.827756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:11.934 [2024-10-17 10:29:14.827787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:11.934 [2024-10-17 10:29:14.827853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:11.934 [2024-10-17 10:29:14.827870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.934 [2024-10-17 10:29:14.827884] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:11.934 [2024-10-17 10:29:14.827900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:11.934 [2024-10-17 10:29:14.827915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:11.935 [2024-10-17 10:29:14.827930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:11.935 [2024-10-17 10:29:14.827945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:11.935 [2024-10-17 10:29:14.827960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:11.935 [2024-10-17 10:29:14.827974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:11.935 [2024-10-17 10:29:14.827989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:11.935 [2024-10-17 10:29:14.828003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:11.935 [2024-10-17 10:29:14.828018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:11.935 [2024-10-17 10:29:14.828033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:11.935 [2024-10-17 10:29:14.828080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:11.935 [2024-10-17 10:29:14.828126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:11.935 [2024-10-17 10:29:14.828231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:11.935 [2024-10-17 10:29:14.828254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:11.935 [2024-10-17 10:29:14.828275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:11.935 [2024-10-17 10:29:14.828297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:11.935 [2024-10-17 10:29:14.828483] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:11.935 [2024-10-17 10:29:14.828507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.935 [2024-10-17 10:29:14.828551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:11.935 [2024-10-17 10:29:14.828574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:11.935 [2024-10-17 10:29:14.828596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:11.935 [2024-10-17 10:29:14.828650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.828667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:11.935 [2024-10-17 10:29:14.828686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.807 ms 00:30:11.935 [2024-10-17 10:29:14.828700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.850281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.850382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:11.935 [2024-10-17 10:29:14.850423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.526 ms 00:30:11.935 [2024-10-17 10:29:14.850441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.850481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.850497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:11.935 [2024-10-17 10:29:14.850513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:11.935 [2024-10-17 10:29:14.850528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.877264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.877362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:11.935 [2024-10-17 10:29:14.877401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.678 ms 00:30:11.935 [2024-10-17 10:29:14.877419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.877456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.877473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:11.935 [2024-10-17 10:29:14.877489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:11.935 [2024-10-17 10:29:14.877504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.877594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.877615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:11.935 [2024-10-17 10:29:14.877632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:11.935 [2024-10-17 10:29:14.877677] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.877725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.877742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:11.935 [2024-10-17 10:29:14.877757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:11.935 [2024-10-17 10:29:14.877772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.891036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.891141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:11.935 [2024-10-17 10:29:14.891154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.238 ms 00:30:11.935 [2024-10-17 10:29:14.891160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.891243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.891252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:11.935 [2024-10-17 10:29:14.891259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:11.935 [2024-10-17 10:29:14.891265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.922105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.922225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:11.935 [2024-10-17 10:29:14.922273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.824 ms 00:30:11.935 [2024-10-17 10:29:14.922292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.929555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.929643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:11.935 [2024-10-17 10:29:14.929687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:30:11.935 [2024-10-17 10:29:14.929713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.977333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.977463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:11.935 [2024-10-17 10:29:14.977507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.563 ms 00:30:11.935 [2024-10-17 10:29:14.977530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.977669] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:11.935 [2024-10-17 10:29:14.977797] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:11.935 [2024-10-17 10:29:14.977959] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:11.935 [2024-10-17 10:29:14.978088] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:11.935 [2024-10-17 10:29:14.978149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.978190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:11.935 [2024-10-17 
10:29:14.978208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.577 ms 00:30:11.935 [2024-10-17 10:29:14.978237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.978296] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:11.935 [2024-10-17 10:29:14.978399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.978417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:11.935 [2024-10-17 10:29:14.978433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.104 ms 00:30:11.935 [2024-10-17 10:29:14.978453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.990245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.990342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:11.935 [2024-10-17 10:29:14.990385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.737 ms 00:30:11.935 [2024-10-17 10:29:14.990403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.996807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.996888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:11.935 [2024-10-17 10:29:14.996928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:11.935 [2024-10-17 10:29:14.996945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.935 [2024-10-17 10:29:14.997044] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:11.935 [2024-10-17 10:29:14.997235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.935 [2024-10-17 10:29:14.997405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:11.935 [2024-10-17 10:29:14.997437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:30:11.935 [2024-10-17 10:29:14.997453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.507 [2024-10-17 10:29:15.475837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.507 [2024-10-17 10:29:15.475888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:12.507 [2024-10-17 10:29:15.475903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 477.718 ms 00:30:12.507 [2024-10-17 10:29:15.475912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.507 [2024-10-17 10:29:15.479902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.507 [2024-10-17 10:29:15.480058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:12.507 [2024-10-17 10:29:15.480076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:30:12.507 [2024-10-17 10:29:15.480085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.507 [2024-10-17 10:29:15.480526] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:12.507 [2024-10-17 10:29:15.480562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.507 [2024-10-17 10:29:15.480570] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:12.507 [2024-10-17 10:29:15.480580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:30:12.507 [2024-10-17 10:29:15.480588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.507 [2024-10-17 10:29:15.480616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.507 [2024-10-17 10:29:15.480626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:12.507 [2024-10-17 10:29:15.480634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:12.507 [2024-10-17 10:29:15.480642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.507 [2024-10-17 10:29:15.480675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 483.630 ms, result 0 00:30:12.507 [2024-10-17 10:29:15.480715] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:12.507 [2024-10-17 10:29:15.480878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.507 [2024-10-17 10:29:15.480888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:12.507 [2024-10-17 10:29:15.480896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:30:12.507 [2024-10-17 10:29:15.480904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.928908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.928948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:13.080 [2024-10-17 10:29:15.928960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 447.139 ms 00:30:13.080 [2024-10-17 10:29:15.928968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.932791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.932823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:13.080 [2024-10-17 10:29:15.932833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.965 ms 00:30:13.080 [2024-10-17 10:29:15.932840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.933116] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:13.080 [2024-10-17 10:29:15.933136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.933144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:13.080 [2024-10-17 10:29:15.933152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:30:13.080 [2024-10-17 10:29:15.933159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.933380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.933415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:13.080 [2024-10-17 10:29:15.933426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:13.080 [2024-10-17 10:29:15.933434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 
10:29:15.933479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 452.757 ms, result 0 00:30:13.080 [2024-10-17 10:29:15.933518] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:13.080 [2024-10-17 10:29:15.933528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:13.080 [2024-10-17 10:29:15.933538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.933546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:13.080 [2024-10-17 10:29:15.933555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 936.512 ms 00:30:13.080 [2024-10-17 10:29:15.933563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.933593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.933601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:13.080 [2024-10-17 10:29:15.933609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:13.080 [2024-10-17 10:29:15.933619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.944770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:13.080 [2024-10-17 10:29:15.944871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.944882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:13.080 [2024-10-17 10:29:15.944891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.237 ms 00:30:13.080 [2024-10-17 10:29:15.944898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.945592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.945612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:13.080 [2024-10-17 10:29:15.945620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:30:13.080 [2024-10-17 10:29:15.945630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.947853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.947994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:13.080 [2024-10-17 10:29:15.948009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.207 ms 00:30:13.080 [2024-10-17 10:29:15.948017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.948069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.948079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:13.080 [2024-10-17 10:29:15.948088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:13.080 [2024-10-17 10:29:15.948095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.948201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.948211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:13.080 
[2024-10-17 10:29:15.948219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:13.080 [2024-10-17 10:29:15.948227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.948246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.948255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:13.080 [2024-10-17 10:29:15.948263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:13.080 [2024-10-17 10:29:15.948270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.948297] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:13.080 [2024-10-17 10:29:15.948306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.948316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:13.080 [2024-10-17 10:29:15.948324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:13.080 [2024-10-17 10:29:15.948330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.948383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.080 [2024-10-17 10:29:15.948393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:13.080 [2024-10-17 10:29:15.948401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:13.080 [2024-10-17 10:29:15.948408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.080 [2024-10-17 10:29:15.949379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1149.237 ms, result 0 00:30:13.080 [2024-10-17 10:29:15.961742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.080 [2024-10-17 10:29:15.977732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:13.080 [2024-10-17 10:29:15.986101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:13.342 Validate MD5 checksum, iteration 1 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:13.342 10:29:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:13.342 10:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:13.342 [2024-10-17 10:29:16.358614] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 00:30:13.342 [2024-10-17 10:29:16.358902] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81564 ] 00:30:13.601 [2024-10-17 10:29:16.508269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.601 [2024-10-17 10:29:16.615903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.506  [2024-10-17T10:29:18.858Z] Copying: 631/1024 [MB] (631 MBps) [2024-10-17T10:29:20.234Z] Copying: 1024/1024 [MB] (average 591 MBps) 00:30:17.143 00:30:17.143 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:17.143 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=14ff52dbcbcf9c397d381e35919246c8 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 14ff52dbcbcf9c397d381e35919246c8 != \1\4\f\f\5\2\d\b\c\b\c\f\9\c\3\9\7\d\3\8\1\e\3\5\9\1\9\2\4\6\c\8 ]] 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:19.044 Validate MD5 checksum, iteration 2 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:19.044 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.044 [2024-10-17 10:29:22.008183] Starting SPDK v25.01-pre git sha1 
2a2bf59c2 / DPDK 24.03.0 initialization... 00:30:19.044 [2024-10-17 10:29:22.008299] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81630 ] 00:30:19.302 [2024-10-17 10:29:22.156968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.302 [2024-10-17 10:29:22.233488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.685  [2024-10-17T10:29:24.347Z] Copying: 657/1024 [MB] (657 MBps) [2024-10-17T10:29:28.556Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:30:25.465 00:30:25.465 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:25.465 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f986d2f19c1a91202c13bc1be01f2b53 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f986d2f19c1a91202c13bc1be01f2b53 != \f\9\8\6\d\2\f\1\9\c\1\a\9\1\2\0\2\c\1\3\b\c\1\b\e\0\1\f\2\b\5\3 ]] 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81534 ]] 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81534 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81534 ']' 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81534 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81534 00:30:27.362 killing process with pid 81534 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81534' 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 81534 00:30:27.362 10:29:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81534 00:30:27.931 [2024-10-17 10:29:30.807554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:27.931 [2024-10-17 10:29:30.818396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.818431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:27.931 [2024-10-17 10:29:30.818443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:27.931 [2024-10-17 10:29:30.818450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.818468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:27.931 [2024-10-17 10:29:30.820601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.820625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:27.931 [2024-10-17 10:29:30.820634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.122 ms 00:30:27.931 [2024-10-17 10:29:30.820641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.820833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.820842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:27.931 [2024-10-17 10:29:30.820849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:30:27.931 [2024-10-17 10:29:30.820856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.822633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.822659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:27.931 [2024-10-17 10:29:30.822668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.765 ms 00:30:27.931 [2024-10-17 10:29:30.822675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.823546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.823570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:27.931 [2024-10-17 10:29:30.823578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:30:27.931 [2024-10-17 10:29:30.823584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.831879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.831905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:27.931 [2024-10-17 10:29:30.831914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.268 ms 00:30:27.931 [2024-10-17 10:29:30.831920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.836489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.836513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:27.931 [2024-10-17 10:29:30.836522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.537 ms 00:30:27.931 [2024-10-17 10:29:30.836529] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.836587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.836595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:27.931 [2024-10-17 10:29:30.836602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:27.931 [2024-10-17 10:29:30.836607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.844625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.844778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:27.931 [2024-10-17 10:29:30.844791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.005 ms 00:30:27.931 [2024-10-17 10:29:30.844797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.852529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.852552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:27.931 [2024-10-17 10:29:30.852560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.708 ms 00:30:27.931 [2024-10-17 10:29:30.852565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.860193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.860216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:27.931 [2024-10-17 10:29:30.860223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.603 ms 00:30:27.931 [2024-10-17 10:29:30.860229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.867700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.867799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:27.931 [2024-10-17 10:29:30.867811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.424 ms 00:30:27.931 [2024-10-17 10:29:30.867816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.931 [2024-10-17 10:29:30.867840] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:27.931 [2024-10-17 10:29:30.867856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.931 [2024-10-17 10:29:30.867864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.931 [2024-10-17 10:29:30.867870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:27.931 [2024-10-17 10:29:30.867877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 
[2024-10-17 10:29:30.867908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.931 [2024-10-17 10:29:30.867970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:27.931 [2024-10-17 10:29:30.867975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 31f79b9b-0c07-4bc8-a6cf-edcdeabf24e9 00:30:27.931 [2024-10-17 10:29:30.867982] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:27.931 [2024-10-17 10:29:30.867987] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:27.931 [2024-10-17 10:29:30.867993] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:27.931 [2024-10-17 10:29:30.867999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:27.931 [2024-10-17 10:29:30.868004] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:27.931 [2024-10-17 10:29:30.868010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:27.931 [2024-10-17 10:29:30.868016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:27.931 [2024-10-17 10:29:30.868021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:27.931 [2024-10-17 10:29:30.868027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:27.931 [2024-10-17 10:29:30.868033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.931 [2024-10-17 10:29:30.868041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:27.931 [2024-10-17 10:29:30.868062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:30:27.932 [2024-10-17 10:29:30.868069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.932 [2024-10-17 10:29:30.878158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.932 [2024-10-17 10:29:30.878246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:27.932 [2024-10-17 10:29:30.878258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.077 ms 00:30:27.932 [2024-10-17 10:29:30.878265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
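A key to the shutdown dump above: each 'Band N: X / 261120' line appears to report the band's valid blocks against its 261120-block capacity, and the three non-empty bands account exactly for the device total reported with them (261120 + 261120 + 2048 = 524288 = 'total valid LBAs'). The 'WAF: inf' line is consistent with WAF = total writes / user writes = 320 / 0 -> inf; a zero user-write count is expected here, since this shutdown path persisted only metadata.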
00:30:27.932 [2024-10-17 10:29:30.878552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.932 [2024-10-17 10:29:30.878565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:27.932 [2024-10-17 10:29:30.878572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:30:27.932 [2024-10-17 10:29:30.878579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.932 [2024-10-17 10:29:30.913318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.932 [2024-10-17 10:29:30.913413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.932 [2024-10-17 10:29:30.913425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.932 [2024-10-17 10:29:30.913432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.932 [2024-10-17 10:29:30.913456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.932 [2024-10-17 10:29:30.913466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.932 [2024-10-17 10:29:30.913473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.932 [2024-10-17 10:29:30.913479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.932 [2024-10-17 10:29:30.913550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.932 [2024-10-17 10:29:30.913560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.932 [2024-10-17 10:29:30.913566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.932 [2024-10-17 10:29:30.913573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.932 [2024-10-17 10:29:30.913586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.932 [2024-10-17 10:29:30.913593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.932 [2024-10-17 10:29:30.913602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.932 [2024-10-17 10:29:30.913608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.932 [2024-10-17 10:29:30.976287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.932 [2024-10-17 10:29:30.976406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.932 [2024-10-17 10:29:30.976420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.932 [2024-10-17 10:29:30.976427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.190 [2024-10-17 10:29:31.027763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.190 [2024-10-17 10:29:31.027888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:28.190 [2024-10-17 10:29:31.027907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.190 [2024-10-17 10:29:31.027913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.190 [2024-10-17 10:29:31.027981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.190 [2024-10-17 10:29:31.027989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:28.190 [2024-10-17 10:29:31.027995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.190 [2024-10-17 10:29:31.028002] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.190 [2024-10-17 10:29:31.028061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.190 [2024-10-17 10:29:31.028069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:28.190 [2024-10-17 10:29:31.028076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.191 [2024-10-17 10:29:31.028092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.191 [2024-10-17 10:29:31.028170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.191 [2024-10-17 10:29:31.028181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:28.191 [2024-10-17 10:29:31.028188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.191 [2024-10-17 10:29:31.028195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.191 [2024-10-17 10:29:31.028224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.191 [2024-10-17 10:29:31.028231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:28.191 [2024-10-17 10:29:31.028238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.191 [2024-10-17 10:29:31.028244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.191 [2024-10-17 10:29:31.028278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.191 [2024-10-17 10:29:31.028285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:28.191 [2024-10-17 10:29:31.028292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.191 [2024-10-17 10:29:31.028298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.191 [2024-10-17 10:29:31.028337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:28.191 [2024-10-17 10:29:31.028345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:28.191 [2024-10-17 10:29:31.028351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:28.191 [2024-10-17 10:29:31.028359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.191 [2024-10-17 10:29:31.028464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 210.043 ms, result 0 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:28.759 Remove shared memory files 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:28.759 10:29:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81275 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:28.759 ************************************ 00:30:28.759 END TEST ftl_upgrade_shutdown 00:30:28.759 ************************************ 00:30:28.759 00:30:28.759 real 1m23.838s 00:30:28.759 user 1m54.295s 00:30:28.759 sys 0m19.097s 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:28.759 10:29:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:28.759 Process with pid 72385 is not found 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@14 -- # killprocess 72385 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@950 -- # '[' -z 72385 ']' 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@954 -- # kill -0 72385 00:30:28.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72385) - No such process 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72385 is not found' 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:28.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81769 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81769 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@831 -- # '[' -z 81769 ']' 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.759 10:29:31 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.759 10:29:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:29.018 [2024-10-17 10:29:31.872817] Starting SPDK v25.01-pre git sha1 2a2bf59c2 / DPDK 24.03.0 initialization... 
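The 'killprocess 81534' sequence traced above is a helper from autotest_common.sh, and the same helper runs again further down for pid 81769. A minimal sketch reconstructed only from the commands visible in the xtrace — the body of the sudo branch and any error handling beyond what is traced are assumptions:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                 # the '[' -z 81534 ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        if [ "$process_name" = sudo ]; then
            :   # this branch is not taken in the trace; its body is unknown here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # valid because spdk_tgt is a child of the test shell
    }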
00:30:29.018 [2024-10-17 10:29:31.873550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81769 ] 00:30:29.018 [2024-10-17 10:29:32.021747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.276 [2024-10-17 10:29:32.110564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.842 10:29:32 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.842 10:29:32 ftl -- common/autotest_common.sh@864 -- # return 0 00:30:29.842 10:29:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:30.100 nvme0n1 00:30:30.100 10:29:32 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:30.100 10:29:32 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:30.100 10:29:32 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:30.358 10:29:33 ftl -- ftl/common.sh@28 -- # stores=5c2d444e-fbb5-4f8b-ba46-da0079cc0c76 00:30:30.358 10:29:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:30.358 10:29:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c2d444e-fbb5-4f8b-ba46-da0079cc0c76 00:30:30.358 10:29:33 ftl -- ftl/ftl.sh@23 -- # killprocess 81769 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 81769 ']' 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@954 -- # kill -0 81769 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@955 -- # uname 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81769 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:30.358 killing process with pid 81769 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81769' 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@969 -- # kill 81769 00:30:30.358 10:29:33 ftl -- common/autotest_common.sh@974 -- # wait 81769 00:30:31.734 10:29:34 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:31.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:31.993 Waiting for block devices as requested 00:30:31.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:31.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:31.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:32.253 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:37.540 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:37.540 Remove shared memory files 00:30:37.540 10:29:40 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:37.540 10:29:40 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:37.540 10:29:40 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:37.540 10:29:40 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:37.540 10:29:40 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:37.540 10:29:40 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:37.540 10:29:40 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:37.540 
************************************ 00:30:37.540 END TEST ftl 00:30:37.540 ************************************ 00:30:37.540 00:30:37.540 real 13m49.911s 00:30:37.540 user 16m14.018s 00:30:37.540 sys 1m8.066s 00:30:37.540 10:29:40 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:37.540 10:29:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:37.540 10:29:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:37.540 10:29:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:37.540 10:29:40 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:30:37.540 10:29:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:37.540 10:29:40 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:30:37.540 10:29:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:37.541 10:29:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:37.541 10:29:40 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:30:37.541 10:29:40 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:30:37.541 10:29:40 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:30:37.541 10:29:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:37.541 10:29:40 -- common/autotest_common.sh@10 -- # set +x 00:30:37.541 10:29:40 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:30:37.541 10:29:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:37.541 10:29:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:37.541 10:29:40 -- common/autotest_common.sh@10 -- # set +x 00:30:38.920 INFO: APP EXITING 00:30:38.920 INFO: killing all VMs 00:30:38.920 INFO: killing vhost app 00:30:38.920 INFO: EXIT DONE 00:30:39.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:39.754 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:39.754 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:39.754 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:39.754 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:40.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:40.276 Cleaning 00:30:40.276 Removing: /var/run/dpdk/spdk0/config 00:30:40.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:40.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:40.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:40.276 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:40.538 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:40.538 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:40.538 Removing: /var/run/dpdk/spdk0 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57007 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57204 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57416 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57509 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57543 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57666 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57684 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57872 00:30:40.538 Removing: /var/run/dpdk/spdk_pid57964 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58049 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58154 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58246 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58285 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58322 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58392 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58487 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58918 00:30:40.538 Removing: /var/run/dpdk/spdk_pid58971 
00:30:40.538 Removing: /var/run/dpdk/spdk_pid59031 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59039 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59141 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59152 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59248 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59264 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59317 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59335 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59388 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59406 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59561 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59597 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59686 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59858 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59942 00:30:40.538 Removing: /var/run/dpdk/spdk_pid59979 00:30:40.538 Removing: /var/run/dpdk/spdk_pid60411 00:30:40.538 Removing: /var/run/dpdk/spdk_pid60511 00:30:40.538 Removing: /var/run/dpdk/spdk_pid60623 00:30:40.538 Removing: /var/run/dpdk/spdk_pid60676 00:30:40.538 Removing: /var/run/dpdk/spdk_pid60707 00:30:40.538 Removing: /var/run/dpdk/spdk_pid60786 00:30:40.538 Removing: /var/run/dpdk/spdk_pid61465 00:30:40.538 Removing: /var/run/dpdk/spdk_pid61507 00:30:40.538 Removing: /var/run/dpdk/spdk_pid62006 00:30:40.538 Removing: /var/run/dpdk/spdk_pid62104 00:30:40.538 Removing: /var/run/dpdk/spdk_pid62219 00:30:40.538 Removing: /var/run/dpdk/spdk_pid62272 00:30:40.538 Removing: /var/run/dpdk/spdk_pid62303 00:30:40.538 Removing: /var/run/dpdk/spdk_pid62329 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64171 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64308 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64312 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64330 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64371 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64375 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64387 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64432 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64436 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64448 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64488 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64492 00:30:40.538 Removing: /var/run/dpdk/spdk_pid64504 00:30:40.538 Removing: /var/run/dpdk/spdk_pid65869 00:30:40.538 Removing: /var/run/dpdk/spdk_pid65966 00:30:40.538 Removing: /var/run/dpdk/spdk_pid67369 00:30:40.538 Removing: /var/run/dpdk/spdk_pid68739 00:30:40.538 Removing: /var/run/dpdk/spdk_pid68822 00:30:40.538 Removing: /var/run/dpdk/spdk_pid68906 00:30:40.538 Removing: /var/run/dpdk/spdk_pid68987 00:30:40.538 Removing: /var/run/dpdk/spdk_pid69092 00:30:40.538 Removing: /var/run/dpdk/spdk_pid69166 00:30:40.538 Removing: /var/run/dpdk/spdk_pid69308 00:30:40.538 Removing: /var/run/dpdk/spdk_pid69662 00:30:40.538 Removing: /var/run/dpdk/spdk_pid69693 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70135 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70320 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70414 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70526 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70573 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70599 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70908 00:30:40.538 Removing: /var/run/dpdk/spdk_pid70967 00:30:40.538 Removing: /var/run/dpdk/spdk_pid71042 00:30:40.852 Removing: /var/run/dpdk/spdk_pid71434 00:30:40.852 Removing: /var/run/dpdk/spdk_pid71580 00:30:40.852 Removing: /var/run/dpdk/spdk_pid72385 00:30:40.852 Removing: /var/run/dpdk/spdk_pid72517 00:30:40.852 Removing: /var/run/dpdk/spdk_pid72697 00:30:40.852 Removing: 
/var/run/dpdk/spdk_pid72783 00:30:40.852 Removing: /var/run/dpdk/spdk_pid73081 00:30:40.852 Removing: /var/run/dpdk/spdk_pid73331 00:30:40.852 Removing: /var/run/dpdk/spdk_pid73674 00:30:40.852 Removing: /var/run/dpdk/spdk_pid73872 00:30:40.852 Removing: /var/run/dpdk/spdk_pid73991 00:30:40.852 Removing: /var/run/dpdk/spdk_pid74038 00:30:40.852 Removing: /var/run/dpdk/spdk_pid74198 00:30:40.852 Removing: /var/run/dpdk/spdk_pid74228 00:30:40.852 Removing: /var/run/dpdk/spdk_pid74281 00:30:40.852 Removing: /var/run/dpdk/spdk_pid74618 00:30:40.852 Removing: /var/run/dpdk/spdk_pid74854 00:30:40.852 Removing: /var/run/dpdk/spdk_pid75503 00:30:40.852 Removing: /var/run/dpdk/spdk_pid76453 00:30:40.852 Removing: /var/run/dpdk/spdk_pid77312 00:30:40.852 Removing: /var/run/dpdk/spdk_pid77673 00:30:40.852 Removing: /var/run/dpdk/spdk_pid77804 00:30:40.852 Removing: /var/run/dpdk/spdk_pid77891 00:30:40.852 Removing: /var/run/dpdk/spdk_pid78417 00:30:40.852 Removing: /var/run/dpdk/spdk_pid78471 00:30:40.852 Removing: /var/run/dpdk/spdk_pid79256 00:30:40.852 Removing: /var/run/dpdk/spdk_pid79944 00:30:40.852 Removing: /var/run/dpdk/spdk_pid80745 00:30:40.852 Removing: /var/run/dpdk/spdk_pid80857 00:30:40.852 Removing: /var/run/dpdk/spdk_pid80900 00:30:40.852 Removing: /var/run/dpdk/spdk_pid80959 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81015 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81074 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81275 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81368 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81474 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81534 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81564 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81630 00:30:40.852 Removing: /var/run/dpdk/spdk_pid81769 00:30:40.852 Clean 00:30:40.852 10:29:43 -- common/autotest_common.sh@1451 -- # return 0 00:30:40.852 10:29:43 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:30:40.852 10:29:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.852 10:29:43 -- common/autotest_common.sh@10 -- # set +x 00:30:40.852 10:29:43 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:30:40.852 10:29:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.852 10:29:43 -- common/autotest_common.sh@10 -- # set +x 00:30:40.852 10:29:43 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:40.852 10:29:43 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:41.114 10:29:43 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:41.114 10:29:43 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:30:41.114 10:29:43 -- spdk/autotest.sh@394 -- # hostname 00:30:41.114 10:29:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:41.114 geninfo: WARNING: invalid characters removed from testname! 
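The hostname lookup just above feeds the -t tag of the lcov capture, and the merge and filter steps that follow repeat the same long option string on every call. Condensed into the sequence it effectively is — LCOV_OPTS (abridged here) stands for the repeated --rc flags, and $spdk/$out are shorthand for the full paths in the trace:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    spdk=/home/vagrant/spdk_repo/spdk
    out=$spdk/../output
    # capture per-test counters, then merge with the pre-test baseline
    lcov $LCOV_OPTS -q -c --no-external -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # successive -r filters prune vendored and system code from the total;
    # in the trace only the '/usr/*' pass adds --ignore-errors unused,unused
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
    rm -f "$out"/cov_base.info "$out"/cov_test.info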
00:31:07.703 10:30:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:09.617 10:30:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:12.921 10:30:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:15.466 10:30:18 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:18.013 10:30:20 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:20.543 10:30:23 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:23.085 10:30:25 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:23.085 10:30:25 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:31:23.085 10:30:25 -- common/autotest_common.sh@1691 -- $ lcov --version 00:31:23.085 10:30:25 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:31:23.085 10:30:25 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:31:23.085 10:30:25 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:31:23.085 10:30:25 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:31:23.085 10:30:25 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:31:23.085 10:30:25 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:23.085 10:30:25 -- scripts/common.sh@336 -- $ read -ra ver1 00:31:23.085 10:30:25 -- scripts/common.sh@337 -- $ IFS=.-: 00:31:23.085 10:30:25 -- scripts/common.sh@337 -- $ read -ra ver2 00:31:23.085 10:30:25 -- scripts/common.sh@338 -- $ local 'op=<' 00:31:23.085 10:30:25 -- scripts/common.sh@340 -- $ ver1_l=2 00:31:23.085 10:30:25 -- scripts/common.sh@341 -- $ ver2_l=1 00:31:23.085 10:30:25 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:31:23.085 10:30:25 -- scripts/common.sh@344 -- $ case "$op" in 00:31:23.085 10:30:25 -- scripts/common.sh@345 -- $ : 1 00:31:23.085 10:30:25 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:31:23.085 10:30:25 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:23.085 10:30:25 -- scripts/common.sh@365 -- $ decimal 1 00:31:23.085 10:30:25 -- scripts/common.sh@353 -- $ local d=1 00:31:23.085 10:30:25 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:23.085 10:30:25 -- scripts/common.sh@355 -- $ echo 1 00:31:23.085 10:30:25 -- scripts/common.sh@365 -- $ ver1[v]=1 00:31:23.085 10:30:25 -- scripts/common.sh@366 -- $ decimal 2 00:31:23.085 10:30:25 -- scripts/common.sh@353 -- $ local d=2 00:31:23.085 10:30:25 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:23.085 10:30:25 -- scripts/common.sh@355 -- $ echo 2 00:31:23.085 10:30:25 -- scripts/common.sh@366 -- $ ver2[v]=2 00:31:23.085 10:30:25 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:31:23.085 10:30:25 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:31:23.085 10:30:25 -- scripts/common.sh@368 -- $ return 0 00:31:23.085 10:30:25 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.085 10:30:25 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:31:23.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.085 --rc genhtml_branch_coverage=1 00:31:23.085 --rc genhtml_function_coverage=1 00:31:23.085 --rc genhtml_legend=1 00:31:23.085 --rc geninfo_all_blocks=1 00:31:23.085 --rc geninfo_unexecuted_blocks=1 00:31:23.085 00:31:23.085 ' 00:31:23.085 10:30:25 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:31:23.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.085 --rc genhtml_branch_coverage=1 00:31:23.085 --rc genhtml_function_coverage=1 00:31:23.085 --rc genhtml_legend=1 00:31:23.085 --rc geninfo_all_blocks=1 00:31:23.085 --rc geninfo_unexecuted_blocks=1 00:31:23.085 00:31:23.085 ' 00:31:23.085 10:30:25 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:31:23.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.085 --rc genhtml_branch_coverage=1 00:31:23.085 --rc genhtml_function_coverage=1 00:31:23.085 --rc genhtml_legend=1 00:31:23.085 --rc geninfo_all_blocks=1 00:31:23.085 --rc geninfo_unexecuted_blocks=1 00:31:23.085 00:31:23.085 ' 00:31:23.085 10:30:25 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:31:23.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.085 --rc genhtml_branch_coverage=1 00:31:23.085 --rc genhtml_function_coverage=1 00:31:23.085 --rc genhtml_legend=1 00:31:23.085 --rc geninfo_all_blocks=1 00:31:23.085 --rc geninfo_unexecuted_blocks=1 00:31:23.085 00:31:23.085 ' 00:31:23.085 10:30:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:23.085 10:30:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:31:23.085 10:30:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:23.085 10:30:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.085 10:30:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.085 10:30:25 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.085 10:30:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.085 10:30:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.085 10:30:25 -- paths/export.sh@5 -- $ export PATH 00:31:23.085 10:30:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.085 10:30:25 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:23.085 10:30:25 -- common/autobuild_common.sh@486 -- $ date +%s 00:31:23.085 10:30:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729161025.XXXXXX 00:31:23.085 10:30:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729161025.ldLUbt 00:31:23.085 10:30:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:31:23.085 10:30:25 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:31:23.085 10:30:25 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:23.085 10:30:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:23.085 10:30:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:23.085 10:30:25 -- common/autobuild_common.sh@502 -- $ get_config_params 00:31:23.085 10:30:25 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:31:23.085 10:30:25 -- common/autotest_common.sh@10 -- $ set +x 00:31:23.085 10:30:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:31:23.085 10:30:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:31:23.085 10:30:25 -- pm/common@17 -- $ local monitor 00:31:23.085 10:30:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:23.085 10:30:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
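The 'lt 1.15 2' check traced a few records above (scripts/common.sh) compares dotted version strings component by component. A sketch reconstructed from that xtrace — branches not exercised in this run (the '>' and '=' cases, and non-numeric components) are assumptions, and the flag bookkeeping of the original is simplified away:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # missing/non-numeric parts compare as 0
    }
    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"                # '1.15' -> (1 15), as in the trace
        IFS=.-: read -ra ver2 <<< "$3"                # '2'    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]}")
            ver2[v]=$(decimal "${ver2[v]}")
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]                              # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }              # lt 1.15 2 -> true, as traced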
00:31:23.085 10:30:25 -- pm/common@25 -- $ sleep 1
00:31:23.085 10:30:25 -- pm/common@21 -- $ date +%s
00:31:23.085 10:30:25 -- pm/common@21 -- $ date +%s
00:31:23.085 10:30:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729161025
00:31:23.085 10:30:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729161025
00:31:23.085 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729161025_collect-cpu-load.pm.log
00:31:23.085 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729161025_collect-vmstat.pm.log
00:31:24.029 10:30:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:31:24.029 10:30:26 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:31:24.029 10:30:26 -- spdk/autopackage.sh@14 -- $ timing_finish
00:31:24.029 10:30:26 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:24.029 10:30:26 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:31:24.029 10:30:26 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:24.029 10:30:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:24.029 10:30:26 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:24.029 10:30:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:24.029 10:30:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:24.029 10:30:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:31:24.029 10:30:26 -- pm/common@44 -- $ pid=83479
00:31:24.029 10:30:26 -- pm/common@50 -- $ kill -TERM 83479
00:31:24.029 10:30:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:24.029 10:30:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:31:24.029 10:30:26 -- pm/common@44 -- $ pid=83480
00:31:24.029 10:30:26 -- pm/common@50 -- $ kill -TERM 83480
00:31:24.029 + [[ -n 5031 ]]
00:31:24.029 + sudo kill 5031
00:31:24.040 [Pipeline] }
00:31:24.056 [Pipeline] // timeout
00:31:24.061 [Pipeline] }
00:31:24.076 [Pipeline] // stage
00:31:24.081 [Pipeline] }
00:31:24.095 [Pipeline] // catchError
00:31:24.104 [Pipeline] stage
00:31:24.106 [Pipeline] { (Stop VM)
00:31:24.119 [Pipeline] sh
00:31:24.403 + vagrant halt
00:31:26.945 ==> default: Halting domain...
00:31:32.246 [Pipeline] sh
00:31:32.521 + vagrant destroy -f
00:31:35.057 ==> default: Removing domain...
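stop_monitor_resources mirrors the start-up above: for each collector it checks for a .pid file under the power/ output directory and sends TERM (the pid=83479 / kill -TERM 83479 pairs in the trace). A self-contained sketch of that teardown pattern follows; the monitor names and paths come from the trace, while the stale-process guard and pidfile removal are assumptions rather than pm/common's verified behavior:

# Sketch of the PID-file teardown traced above; pm/common's real logic may differ.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

signal_monitor_resources() {
	local signal=$1 monitor pid
	for monitor in "${MONITOR_RESOURCES[@]}"; do
		[[ -e $power_dir/$monitor.pid ]] || continue   # collector never started
		pid=$(<"$power_dir/$monitor.pid")
		kill -"$signal" "$pid" 2>/dev/null || true     # may have already exited
		rm -f "$power_dir/$monitor.pid"                # assumed cleanup step
	done
}

signal_monitor_resources TERM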
00:31:35.332 [Pipeline] sh
00:31:35.620 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:31:35.631 [Pipeline] }
00:31:35.648 [Pipeline] // stage
00:31:35.653 [Pipeline] }
00:31:35.668 [Pipeline] // dir
00:31:35.673 [Pipeline] }
00:31:35.687 [Pipeline] // wrap
00:31:35.692 [Pipeline] }
00:31:35.705 [Pipeline] // catchError
00:31:35.713 [Pipeline] stage
00:31:35.716 [Pipeline] { (Epilogue)
00:31:35.729 [Pipeline] sh
00:31:36.018 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:41.311 [Pipeline] catchError
00:31:41.313 [Pipeline] {
00:31:41.328 [Pipeline] sh
00:31:41.615 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:41.616 Artifacts sizes are good
00:31:41.626 [Pipeline] }
00:31:41.640 [Pipeline] // catchError
00:31:41.650 [Pipeline] archiveArtifacts
00:31:41.657 Archiving artifacts
00:31:41.783 [Pipeline] cleanWs
00:31:41.796 [WS-CLEANUP] Deleting project workspace...
00:31:41.796 [WS-CLEANUP] Deferred wipeout is used...
00:31:41.803 [WS-CLEANUP] done
00:31:41.805 [Pipeline] }
00:31:41.821 [Pipeline] // stage
00:31:41.827 [Pipeline] }
00:31:41.842 [Pipeline] // node
00:31:41.847 [Pipeline] End of Pipeline
00:31:41.891 Finished: SUCCESS
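check_artifacts_size.sh printed "Artifacts sizes are good" before archiving. Its internals are not visible in this log; a gate of roughly that shape might look like the sketch below, where the 1 GiB threshold and du-based accounting are purely assumptions:

# Hypothetical size gate in the spirit of check_artifacts_size.sh;
# the real script's thresholds and layout are not shown in this log.
max_kb=$((1024 * 1024))   # assumed 1 GiB cap on the artifact tree
total_kb=$(du -sk output | awk '{print $1}')

if (( total_kb > max_kb )); then
	echo "Artifacts too large: ${total_kb}KB > ${max_kb}KB" >&2
	exit 1
fi
echo "Artifacts sizes are good"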